Archive for the Digital Media Category

How the index card launched the information age

Posted in Digital Media, People in Media History, Print Media on September 30, 2016 by multimediaman

One year ago this month, the final order of library catalog cards was printed by the Online Computer Library Center (OCLC) in Dublin, Ohio. On October 2, 2015, The Columbus Dispatch wrote, “Shortly before 3 p.m. Thursday, an era ended. About a dozen people gathered in a basement workroom to watch as a machine printed the final sheets of library catalog cards to be made …”

The fate of the printed library card, an indispensable indexing tool for more than a century, was inevitable in the age of electronic information and the Internet. It is safe to say that nearly all print with purely informational content—as opposed to items fulfilling a promotional or packaging function—is sure to be replaced by online alternatives.

Founded in 1967, the OCLC is a global cooperative with 16,000 member libraries. Although it no longer prints library cards, the OCLC continues to fulfill its mission by providing shared library resources such as catalog metadata and WorldCat.org, an international online database of library collections.

Speaking about the end of the card catalog era, Skip Prichard, the CEO of the OCLC, said, “The vast majority of libraries discontinued their use of the printed library catalog card many years ago. … But it is worth noting that these cards served libraries and their patrons well for generations, and they provided an important step in the continuing evolution of libraries and information science.”

The 3 x 5 card

Printed library catalog card

The library catalog card is one form of the popular 3 x 5 index card that served as a filing system for a multitude of purposes for over two hundred years. While many of us have been around long enough to have used or maybe even still use them—for addresses and phone numbers, recipes, flash cards or research paper outlines—we may not be aware of the relationship that index cards have to modern information science.

The original purpose of the index card and its subsequent development represented the early stages of information theory and practice. Additionally, as becomes clear below, without the index card as the first functional system for organizing complex categories, subcategories and cross-references, studies in the natural sciences would have never gotten off the ground.

The index card became the indispensable tool for both organizing and comprehending the expansion of human knowledge at every level. Along with several important intermediary steps, the ideas that began with index cards eventually led to relational databases, document management systems, hyperlinks and the World Wide Web.

Carl Linnaeus and natural science

Carl Linnaeus

The Swedish naturalist and physician Carl Linnaeus (1707–1778) is recognized as the creator of the index card. Linnaeus used the cards to develop his system of organizing and naming the species of all living things. Linnaean taxonomy is based on a hierarchy (kingdom, phylum, class, order, family, genus, species) and binomial species naming (Homo erectus, Tyrannosaurus rex, etc.). He published the first edition of his universal conventions in a small pamphlet called “The System of Nature” in 1735.
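
To make the structure concrete, here is a minimal sketch in Python (purely illustrative; Linnaeus worked on paper, and the class and field names below are my own) of a species “index card” recorded against the seven-rank hierarchy, with the binomial name formed from the genus and the specific epithet:

    from dataclasses import dataclass

    # Illustrative only: one species "card" filed against the seven Linnaean ranks.
    @dataclass
    class SpeciesCard:
        kingdom: str
        phylum: str
        class_: str   # "class" is a reserved word in Python
        order: str
        family: str
        genus: str
        epithet: str  # the specific epithet, e.g. "sapiens"

        def binomial(self) -> str:
            # Binomial name = capitalized genus plus lowercase specific epithet
            return f"{self.genus.capitalize()} {self.epithet.lower()}"

    card = SpeciesCard("Animalia", "Chordata", "Mammalia", "Primates",
                       "Hominidae", "Homo", "sapiens")
    print(card.binomial())  # Homo sapiens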

Beginning in his early twenties, Linnaeus was interested in producing a series of books on all known species of plants and animals. At that time, so many new species were being discovered that Linnaeus knew that as soon as a book was printed, a large amount of new information would already be available. He wanted to quickly and accurately revise his publications to take the new findings into account in subsequent editions.

As time went on, Linnaeus developed different functional methods of sorting through and organizing the enormous amounts of information connected with his growing collection of plant, animal and shell specimens (eventually reaching 40,000 samples). His biggest problem was creating a process that was both structured enough to facilitate retrieval of previously collected information and flexible enough to allow rearrangement and addition of new information.

Pages from an early edition of Linnaeus’ “The System of Nature”

Working with paper notations in the eighteenth century, he needed a system that would allow the flow of names, references, descriptions and drawings into and out of a fixed sequence for the purposes of comparison and rearrangement. This “packing” and “unpacking” of information was a continuous process that enabled Linnaeus’ research to keep up with the changes in what was known about living species.

Linear vs non-linear methods

At first, Linnaeus used notebooks. This linear method—despite his best efforts to leave pages open for updates and new information—proved to be unworkable and wasteful. As estimates of how much room to allow often proved incorrect, Linnaeus was forced to squeeze new details into ever-shrinking space or was left with unused blank pages.

After thirty years of working with notebooks, Linnaeus began to experiment with a filing system of information recorded on separate sheets of paper. This was later converted to small sheets of thick paper that could be quickly handled, shuffled through and laid out on a table in two dimensions like a deck of playing cards. This is how the index card was born.

A stack of Linnaeus’ handwritten index cards

Linnaeus’ index card system was able to represent the variation of living organisms by showing multiple affinities in a map-like fashion. In order to accommodate the ever-expanding knowledge of new species—today taxonomic databases catalog an estimated 8.7 million species—Linnaeus created a breakthrough method for managing complex information.

Melvil Dewey and DDC

While index cards continued to be used in Europe, an important step forward in information management was made in the US by Melvil Dewey (1851-1931), the creator of the well-known Dewey Decimal System (or Dewey Decimal Classification, DDC). Used by libraries for the cataloging of books since 1876, the DDC was based on index cards and introduced the concepts of “relative location” and “relative index” to bibliography. It also enabled libraries to add books to their collections based on subject categories and an effectively unlimited number of decimal expressions known as “call numbers.”
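
As a rough illustration of “relative location” (a sketch only, not an official DDC implementation; the class numbers below are used purely as examples), sorting on the decimal call number places a book beside its subject neighbors rather than at the end of the acquisition list:

    # Sketch only: (call number, title) pairs listed in acquisition order.
    # Sorting on the decimal call number groups related subjects together,
    # which is what "relative location" means in practice.
    books = [
        ("576.8",   "A title on evolution"),
        ("025.431", "A title on the Dewey Decimal Classification"),
        ("004.678", "A title on the Internet"),
        ("025.04",  "A title on information retrieval"),
    ]

    for call_number, title in sorted(books):
        print(call_number, title)

    # The 004.x, 025.x and 576.x titles print in subject order, so a newly
    # acquired book simply slots in beside related call numbers.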

The young Melvil Dewey

Prior to the DDC, libraries attempted to assign books to a permanent physical location based on their order of acquisition. This linear approach proved unworkable, especially as library collections grew rapidly in the latter part of the nineteenth century. With industrialization, libraries were overflowing with paper: letters, reports, memos, pamphlets, operation manuals and schedules, as well as books, were flooding in, and the methods of cataloging and storing these collections needed to keep pace.

In the 1870s, while working at Amherst College Library, Melvil Dewey became involved with libraries across the country. He was a founding member of the American Library Association and became editor of The Library Journal, a trade publication that still exists today. In 1878, Dewey published the first edition of “A Classification and Subject Index for Cataloguing and Arranging the Books and Pamphlets of a Library,” which elaborated on the use of the library card catalog index.

Precursor to the information age

Title page of the first edition of Dewey’s bibliographic classification system

Like many others of his generation, Melvil Dewey was committed to scientific management, standardization and the democratic ideal. By the end of the nineteenth century the Dewey classification system and his 3 x 5 card catalog were being used in nearly every school and public library in the US. The basic concept was that any member of society could walk into a library anywhere in the country, go to the card catalog and be able to locate the information they were looking for.

In 1876 Dewey created a company called Library Bureau and began providing card catalog supplies, cabinets and equipment to libraries across the country. Following the enormous success of this business, Dewey expanded the Library Bureau’s information management services to government agencies and large corporations at the turn of the twentieth century.

In 1896, Dewey formed a partnership with Herman Hollerith and the Tabulating Machine Company (TMC) to provide the punch cards used for the electro-mechanical counting system of the US government census operations. Dewey’s relationship with Hollerith is significant as TMC would be renamed International Business Machines (IBM) in 1924 and become an important force in the information age and creator of the first relational database.

Paul Otlet and multidimensional indexing

Paul Otlet working in his office in the 1930s

While Dewey’s classification system became the standard in US libraries, others were working on bibliographic cataloging ideas, especially in Europe. In 1895, the Belgians Paul Otlet (1868-1944) and Henri La Fontaine founded the International Institute of Bibliography (IIB) and began working on something they called the Universal Bibliographic Repertory (UBR), an enormous catalog based on index cards. Funded by the Belgian government, the UBR involved the collection of books, articles, photographs and other documents in order to create a one-of-a-kind international index.

As described by Otlet, the ambition of the UBR was to build “an inventory of all that has been written at all times, in all languages, and on all subjects.” Although they used the DDC as a starting point, Otlet and La Fontaine found limitations in Dewey’s classification system while working on the UBR. Some of the issues were related to Dewey’s American perspective; the DDC lacked some categories needed for information related to other regions of the world.

A section of the Universal Bibliographic Repertory

More fundamentally, however, Otlet and La Fontaine made an important conceptual breakthrough over Dewey’s approach. In particular, they conceived of a complex multidimensional indexing system that would allow for more deeply defined subject categories and cross-referencing of related topics.

Their critique was based on Otlet’s pioneering idea that the content of bibliographic collections needed to be separated from their form and that a “universal” classification system needed to be created that included new media and information sources (magazines, photographs, scientific papers, audio recordings, etc.) and moved away from the exclusive focus on the location of books on library shelves.

Analog information links and search

After Otlet and La Fontaine received permission from Dewey to modify the DDC, they set about creating the Universal Decimal Classification (UDC). The UDC extended Dewey’s cataloging expressions to include symbols (the equal sign, plus sign, colon, quotation marks and parentheses) for the purpose of establishing “links” between multiple topics. This was a very significant breakthrough that reflected the enormous growth of information taking place at the end of the nineteenth century.
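
A toy sketch of the idea (my own illustration, not the official UDC syntax, and the class numbers are placeholders): the connector symbols can be split back out of a compound class mark so that each linked subject can be followed like a cross-reference:

    import re

    # Sketch only: split a compound UDC-style class mark on its connector
    # symbols (":" relates subjects, "+" joins them) to recover the
    # individual class numbers that the notation links together.
    def linked_subjects(class_mark: str) -> list:
        parts = re.split(r"[:+]", class_mark)
        return [part.strip() for part in parts if part.strip()]

    print(linked_subjects("311:622"))  # ['311', '622'] -- two related subjects
    print(linked_subjects("53+54"))    # ['53', '54']   -- two subjects joined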

By 1900, the UBR had more than 3 million entries on index cards and was supported by more than 300 IIB members from dozens of countries. The project was so successful that Otlet began working on a plan to copy the UBR and distribute it to major cities around the world. However, with no effective method for reproducing the index cards, other than typing them out by hand, this project ran up against the technical limitations of the time.

Henri La Fontaine and staff members at the Mundaneum in Mons, Belgium. At its peak in 1924, the catalog contained 18 million index cards.

In 1910, Otlet and La Fontaine shifted their attention to the establishment of the Mundaneum in Mons, Belgium. Again with government support, the aim of this institution was to bring together all of the world’s knowledge in a single UDC index. They created the gigantic repository as a service where anyone in the world could submit an inquiry on any topic for a fee. This analog search service would provide information back to the requester in the form of index cards copied from the Mundaneum’s bibliographic catalog.

By 1924, the Mundaneum contained 18 million index cards housed in 15,000 catalog drawers. Plagued by financial difficulties and a reduction of support from the Belgian government during the Depression and the lead-up to World War II, Paul Otlet realized that further management of the card catalog had become impractical. He began to consider more advanced technologies—such as photomechanical recording systems and even ideas for electronic information sharing—to fulfill his vision.

Although the Mundaneum was sacked by the Nazis in 1940 and most of the index cards destroyed, the ideas of Paul Otlet anticipated the technologies of the information age that were put into practice after the war. The pioneering work of others—such as Emanuel Goldberg, Vannevar Bush, Douglas Engelbart and Ted Nelson—would lead to the creation of the Internet, the World Wide Web and search engines in the second half of the twentieth century.

Steve Case and “The Third Wave” of the Internet

Posted in Digital Media, Internet, Mobile Media, People in Media History on May 25, 2016 by multimediaman

Steve Case and The Third Wave

In 1980, Alvin Toffler published The Third Wave, a sequel to his 1970 best-seller Future Shock and an elaboration of his ideas about the information age and its stressful impact on society. In contrast to his first book, Toffler sought in The Third Wave to convince readers not to dread the future but instead to embrace the potential at the heart of the information revolution.

Alvin Toffler

Actually, Alvin—and his co-author and wife, Heidi Toffler—were among the few writers to appreciate early on the transformative power of electronic communications. Long before the word “Internet” was used by anyone but a few engineers working for the US Department of Defense—and after reporting for Fortune magazine on foundational Third Wave companies like IBM, AT&T and Xerox—Toffler began to hypothesize about “information overload” and the disruptive force of networked data and communications upon manufacturing, business, government and the family.

"The Third Wave" (1980) by Alvin Toffler

“The Third Wave” (1980) by Alvin Toffler

For example, one can read in The Third Wave, “Humanity faces a quantum leap forward. It faces the deepest social upheaval and creative restructuring of all time. Without clearly recognizing it, we are engaged in building a remarkable new civilization from the ground up. This is the meaning of the Third Wave.” Even if these words sound a little excessive today, in 1980 they would certainly have seemed a wild exaggeration by two fanatical tech futurists.

But Alvin and Heidi were really onto something. More than 35 years later, who can deny the truth behind Toffler’s basic ideas about the global information revolution and its consequences? The Internet, networked PCs, the World Wide Web, wireless broadband, smartphones, social media and, ultimately, the Internet of Things have changed and are changing every aspect of society.

To his credit, Steve Case—who cofounded the early Internet company America Online—has written a new book called The Third Wave: An Entrepreneur’s Vision of the Future that borrows its title from Toffler’s pioneering work. As Case explains in the preface, he was motivated by Toffler’s theories as a college student because they “completely transformed the way I thought about the world—and what I imagined for the future.”

Steve Case’s The Third Wave

First Wave Internet companies

In Steve Case’s book, “The Third Wave” refers to three phases of Internet development as opposed to Toffler’s stages of civilization. For Case, the first wave was the construction of the “on ramps”—including AOL and others like Sprint, Apple and Microsoft—to the information superhighway. The second wave was about building on top of first wave infrastructure by companies like Google, Amazon, eBay, Facebook, Twitter and others that have developed “software as a service” (SaaS).

Case’s Third Wave of the Internet is the promise of connecting everything to everything else, i.e. the rebuilding of entire sectors of the economy with “smart” technologies. While the ideas surrounding what he calls the Internet of Everything are not new—Case does not claim to have originated the concept—the new book does discuss important barriers to the realization of the Third Wave of Internet connectivity and how to overcome them.

Second Wave Internet companies

Case argues that Third Wave companies will require a new set of principles in order to be successful, that following the playbook of Second Wave companies will not do. He writes, “The playbook they need, instead, is one that worked during the First Wave, when the Internet was still young and skepticism was still high; when the barriers to entry were enormous, and when partnerships were a necessity to reaching your customers; when the regulatory system was coming to grips with a new reality and struggling to figure out the appropriate path forward.”

In much of the book, Case reviews his ideas about the transformation of the health care, education and food industries by applying the culture of innovation and ambition for change that is commonly found in Silicon Valley. However, he cautions that current Second Wave models of venture capital investment, views about the role of government and aversion to collaboration among entrepreneurs threaten to stall or kill Third Wave change before it can get started.

The story of AOL

In some ways, the most interesting aspects of Case’s book deal with the origin, growth and decline of America Online (AOL). Case gives a candid explication of the trials and tribulations of his innovative dial-up Internet company from 1983 to 2003. Case explains that before AOL achieved significant consumer success (27.6 million users by 2002) and Wall Street success (a $222 billion market cap by 1999), the company and its precursors went through a series of near-death experiences.

Steve Case in 1987 before the founding of America Online

For example, he tells the story of a deal that he signed with Apple in 1987 that was cancelled by the Cupertino-based company during implementation. Case had sold Apple customer service executives on a partnership with his company at the time, Quantum Computer Services, to build an online support system called AppleLink Personal Edition that would be offered to customers as a software add-on. Disagreements between Apple and Quantum over how to sell the product to computer users ultimately killed the project.

Facing the termination of the investment funding that was tied to the $5 million agreement, Case and the other founders decided to sue Apple for breach of contract. Acknowledging their liability to Quantum, Apple agreed to pay $3 million to “tear up the contract.” With this new source of cash, Case and his partners relaunched the company as America Online and began signing up consumers for their service directly.

This tale and others reinforce one of the key themes of Case’s book: Third Wave entrepreneurs will need to persevere through “the long slog” to success.

The January 24, 2000 cover of Time magazine with Steve Case and Jerry Levin announcing the AOL-Time Warner merger.

The end of Steve Case’s relationship with AOL is also a lesson in the leadership skills required for Third Wave success. In a chapter entitled “Matter of Trust” (the longest in the book), Steve Case relives the story of AOL’s merger with, and effective acquisition of, Time Warner. It is a cautionary tale of both the excesses of Wall Street valuations during the dot com boom and the crisis of traditional media companies in the face of Third Wave innovation.

Case says that while the combination of AOL with Time Warner in 2000—the largest corporate merger in history up to that point—made sense at the time, two months later the dot com bubble burst and the company lost eighty percent of its value within a year. This was followed by a series of leadership battles that proved there were deep-seated feelings of “personal mistrust and lingering resentments” among top Time Warner executives over the business potential of the Internet and the upstart start-up called AOL.

Steve Case writes that, although the dot com crash was certainly a factor, “It came down to emotions and egos and, ultimately, the culture itself. That something with the potential to be the first trillion-dollar company could end up losing $200 billion in value should tell you just how important the people factor is. It doesn’t really matter what the plan is if you can’t get your people aligned around achieving the same objectives.”

What now?

For those of us who were in the traditional media business—i.e. print, television and radio—the word “disruption” hardly describes the impact of the Internet over the past three decades. When companies like AOL were getting started with their modems and dial-up connections, most of us were looking pretty good. We had little time or interest in the tacky little AOL “You’ve Got Mail” audio message. Even as we reluctantly embraced IBM, Apple and Microsoft as partners in our front-office and production operations, we were making smug remarks about the absurdity of eBay and Amazon as legitimate business ideas.

IoT is at the center of Case’s Third Wave of innovation.

Steve Case’s book represents a timely warning to the enterprises and business leaders of today who similarly dismiss the notion of the IoT. He points to Uber and Airbnb to show that the hospitality and transportation industries are right now being turned upside down by this new wave of information-enabled “sharing” businesses.

Actually, Case is an unlikely spokesman for the next wave of innovation, having personally made out quite well (his net worth stands at $1.37 billion) despite the shipwreck that became AOL Time Warner. If he had been born twenty-five years later, Case could possibly have been another Mark Zuckerberg of Facebook and ridden the Second Wave of the Internet (Zuckerberg got his start in coding by hacking AOL Instant Messenger) over the ruins of the dot com bust.

But that was then and this is now. Case has decided to commit himself to investing in present-day entrepreneurial ventures through his Revolution Growth venture capital fund. His book is something of a roadmap for those who want to learn from his experience, bravely launch into the Third Wave of the Internet and build start-ups of a new kind. As Alvin Toffler wrote in Future Shock, “If we do not learn from history, we shall be compelled to relive it. True. But if we do not change the future, we shall be compelled to endure it. And that could be worse.”

Books, e-books and the e-paper chase

Posted in Digital Media, Mobile, Mobile Media, Paper, Print Media on March 22, 2016 by multimediaman

Last November Amazon opened its first retail book store in Seattle near the campus of the University of Washington. More than two decades after it pioneered online book sales—and initiated the e-commerce disruption of the retail industry—the $550 billion company seemed to be taking a step backward with its “brick and mortar” Amazon Books.

Amazon opened its first retail book store in Seattle on November 3, 2015

However, Amazon launched its store concept with a nod to traditional consumer shopping habits, i.e. the ability to “kick the tires.” Amazon knows very well that many customers like to browse the shelves in bookstores and fiddle with electronic gadgets like the Kindle, Fire TV and Echo before they make buying decisions.

So far, the Seattle book store has been successful and Amazon has plans to open more locations. Some unique features of the Amazon.com buying experience have been extended to the book store. Customer star ratings and reviews are posted near book displays; shoppers are encouraged to use the Amazon app and scan bar codes to check prices.

Amazon’s book store initiative was also possibly motivated by the persistence and strength of the print book market. Despite the rapid rise of e-books, print books have shown a resurgence of late. Following a decline of 15 million units in 2013, to just above 500 million print books sold, the past two years have seen increases to 560 million in 2014 and 570 million in 2015. Meanwhile, the American Booksellers Association reported a substantial increase in independent bookstores over the past five years (1,712 member stores in 2,227 locations in 2015, up from 1,410 in 1,660 locations in 2010).

Print books and e-books

After rising rapidly since 2008, e-book sales have stabilized at between 25% and 30% of total book sales

The ratio of e-book to print book sales appears to have leveled off at around 1 to 3. This relationship is consistent with recent public perception surveys and learning studies showing that the reading experience and information retention properties of print books are superior to those of e-books.

The reasons for the recent uptick in print sales and the slowing of e-book expansion are complex. Changes in the overall economy, adjustments to bookstore inventory from digital print technologies and the acclimation of consumers to the differences between the two media platforms have created a dynamic and rapidly shifting landscape.

As many analysts have insisted, it is difficult to make any hard and fast predictions about future trends of either segment of the book market. However, two things are clear: (1) the printed book will undergo little further evolution and (2) the e-book is headed for rapid and dramatic innovation.

Amazon launched the e-book revolution in 2007 with the first Kindle device. Although digital books had been available for decades in various computer file formats and on media such as CD-ROMs, e-books connected with Amazon’s Kindle took off in popularity beginning in 2008. The most important technical innovation of the Kindle—and a major factor in its success—was the implementation of the e-paper display.

Distinct from the backlit LCD displays on most mobile devices and personal computers, e-paper displays are designed to mimic the appearance of ink on paper. Another important difference is that the energy requirements of e-paper devices are significantly lower than those of LCD-based systems. Even in later models that offer automatic back lighting for low-light reading conditions, e-paper devices will run for weeks on a single charge while most LCD systems require a recharge in less than 24 hours.

Nick Sheridon and Gyricon

The theory behind the Kindle’s ink-on-paper emulation was originated in the 1970s at the Xerox Palo Alto Research Center in California by Nick Sheridon. Sheridon developed his concepts while working to overcome limitations with the displays of the Xerox Alto, the first desktop computer. The early monitors could only be viewed in darkened office environments because of insufficient brightness and contrast.

Nick Sheridon and his team at Xerox PARC invented Gyricon in 1974, a thin layer of transparent plastic composed of bichromal beads that rotate with changes in voltage to create an image on the surface

Sheridon sought to develop a display that could match the contrast and readability of black ink on white paper. Along with his team of engineers at Xerox, Sheridon developed Gyricon, a substrate with thousands of microscopic plastic beads—each of which was half black and half white—suspended in a thin, transparent silicone sheet. Changes in voltage polarity caused either the white or black side of the beads to rotate up and display images and text without backlighting or special ambient light conditions.

After Xerox cancelled the Alto project in the early 1980s, Sheridon took his Gyricon technology in a new direction. By the late 1980s, he was working on methods to manufacture a new digital display system as part of the “paperless office.” As Sheridon explained later, “There was a need for a paper-like electronic display—e-paper! It needed to have as many paper properties as possible, because ink on paper is the ‘perfect display.’”

In 2000, Gyricon LLC was founded as a subsidiary of Xerox to develop commercially viable e-paper products. The startup opened manufacturing facilities in Ann Arbor, Michigan and developed several products including e-signage that utilized Wi-Fi networking to remotely update messaging. Unfortunately, Xerox shut down the entity in 2005 due to financial problems.

Pioneer of e-paper, Nick Sheridon

Among the challenges Gyricon faced was making a truly paper-like material that had sufficient contrast and resolution while keeping manufacturing costs low. Sheridon maintained that e-paper displays would only be viable economically if units were sold for less than $100 so that “nearly everyone could have one.”

As Sheridon explained in a 2009 interview: “The holy grail of e-paper will be embodied as a cylindrical tube, about 1 centimeter in diameter and 15 to 20 centimeters long, that a person can comfortably carry in his or her pocket. The tube will contain a tightly rolled sheet of e-paper that can be spooled out of a slit in the tube as a flat sheet, for reading, and stored again at the touch of a button. Information will be downloaded—there will be simple user interface—from an overhead satellite, a cell phone network, or an internal memory chip.”

E Ink

By the 1990s competitors began entering the e-paper market. E Ink, founded in 1998 by a group of scientists and engineers from MIT’s Media Lab including Russ Wilcox, developed a concept similar to Sheridon’s. Instead of using rotating beads with white and black hemispheres, E Ink introduced a method of suspending microencapsulated cells filled with both black and white particles in a thin transparent film. Electrical charges to the film caused the black or white particles to rise to the top of the microcapsules and create the appearance of a printed page.

E Ink cofounder Russ Wilcox

E Ink’s e-paper technology was initially implemented by Sony in 2004 in the first commercially available e-reader, the LIBRIe. In 2006, Motorola integrated an E Ink display in its F3 cellular phone. A year later, Amazon included E Ink’s 6-inch display in the first Amazon Kindle, which became by far the most popular device of its kind.

Kindle Voyage (2014) and Kindle Paperwhite (2015) with the latest e-paper displays (Carta) from E Ink

Subsequent generations of Kindle devices have integrated E Ink displays with progressively improved contrast, resolution and energy consumption. By 2011, the third generation Kindle included touch screen capability (the original Kindle had an integrated hardware keyboard for input).

The current edition of the Kindle Paperwhite (3rd Generation) combines back lighting and a touch interface with E Ink Carta technology and a resolution of 300 pixels per inch. Many other e-readers such as the Barnes & Noble Nook, the Kobo, the Onyx Boox and the PocketBook also use E Ink products for their displays.

Historical parallel

The quest to replicate, as closely as possible in electronic form, the appearance of ink on paper is logical enough. In the absence of a practical and culturally established form, a new medium naturally strives to emulate that which came before it. This process is reminiscent of the evolution of the first printed books. For many decades, print carried over the characteristics of the books that were hand-copied by scribes.

It is well known that Gutenberg’s “mechanized handwriting” invention (1440-50) sought to imitate the best works of the Medieval monks. The Gutenberg Bible, for instance, has two columns of print text while everything else about the volume—paper, size, ornamental drop caps, illustrations, gold leaf accents, binding, etc.—required techniques that preceded the invention of printing. Thus, the initial impact of Gutenberg’s system was an increase in the productivity of book duplication and the displacement of scribes; it would take some time for the implications of the new process to work their way through the function, form and content of books.

Ornamented title page of the Gutenberg Bible printed in 1451

More than a half century later—following the spread of Gutenberg’s invention to the rest of Europe—the book began to evolve dramatically and take on attributes specific to printing and other changes taking place in society. For example, by the first decade of the 1500s, books were no longer stationary objects to be read in exclusive libraries and reading rooms of the privileged few. As their cost dropped, editions became more plentiful and literacy expanded, and books were being read everywhere and by everybody.

By the mid-1500s, both the form and content of books had been transformed. To facilitate their newfound portability, the size of books fell from the folio (14.5” x 20”) to the octavo dimension (7” x 10.5”). By the beginning of the next century, popular literature—the first European novel is widely recognized as Cervantes’ Don Quixote of 1605—supplanted verse and classic texts. New forms of print media developed, such as chapbooks, broadsheets and newspapers.

Next generation e-paper

It seems clear that the dominance of LCD displays on computers, mobile and handheld devices is a factor in the persistent affinity of the public for print books. Much of the technology investment and advancement of the past decade—coming from companies such as Apple Computer—has been committed to computer miniaturization, touch interfaces and mobility, not the transition from print to electronic media. While first-decade e-readers have made important strides, most e-books are still being read on devices that are visually distant from print books, impeding a more substantial migration to the new media.

Additionally, most current e-paper devices have many unpaper-like characteristics such as relatively small size, inflexibility, limited bit depth and the inability to write on them. All current-model e-paper Kindles, for example, are limited to 6-inch displays with 16 grey levels beneath a heavy and fragile layer of glass, and offer no support for handwriting.

The Sony Digital Paper System (DPT-S1) is based on E Ink’s Mobius e-paper display technology: 13.3” format, flexible and supports stylus handwriting

A new generation of e-paper systems is now being developed that overcomes many of these limitations. In 2014, Sony released its Digital Paper System (DPT-S1), a letter-size e-reader and e-notebook ($1,100 at launch and currently selling for $799). The DPT-S1 is based on E Ink’s Mobius display, a 13.3” thin film transistor (TFT) platform that is flexible and can accept handwriting from a stylus.

Since it does not have any glass, the new Sony device weighs 12.6 oz, about half the weight of a similar LCD-based tablet. With the addition of stylus-based handwriting capability, the device functions like an electronic notepad, and notes can be written in the margins of e-books and other electronic documents.

These advancements and others show that e-paper is positioned for a renewed surge into things that have yet to be conceived. Once a flat surface can be curved or even folded and then made to transform itself into any image—including a color image—at any time, at very low cost and with very low energy consumption, then many things become possible: e-wallpaper, e-wrapping paper, e-milk cartons and e-price tags. The possibilities are enormous.

Streaming and the era of on-demand media

Posted in Audio, Digital Media, Video on January 16, 2016 by multimediaman

On January 6, Netflix went live with its video-streaming service in 130 new countries across the globe. The expansion—covering most of the world except for China—was announced by Netflix cofounder and CEO Reed Hastings during a keynote speech at the International Consumer Electronics Show in Las Vegas. Hastings said, “Today, right now, you are witnessing the birth of a global TV network.”

Reed Hastings, CEO of Netflix, announcing the global expansion of the streaming video service on January 6

Prior to this latest announcement, Netflix had 40 million subscribers in the US and 20 million subscribers internationally across 60 countries, with service available in 17 languages. According to Hastings, the company’s goal is to reach 200 countries by the end of 2016 and sign up 90 million US and 450 million worldwide subscribers.

The rapid expansion of Netflix is part of the transformation of TV program and movie viewing that has been underway for a decade or more. While “linear TV”— programming that is presented at specific times and on non-portable screens—is still popular, it is being rapidly overtaken by the new personalized, on-demand and mobile subscription services like Netflix.

According to Netflix, the growth of Internet TV is driven by (1) advancements in Internet reliability and performance, (2) time and place flexibility of on-demand viewing and (3) accelerating innovation of streaming video technology. A possible fourth driver of Netflix’s success is its subscription-based user model. Unlike previous on-demand solutions that often required consumers to purchase one at a time—or rent for a specified period of time—their own copies of movies and music, streaming media solutions like Netflix offer subscribers access to the entire content library without limitations for a monthly fee.

Streaming media

Popular video and music streaming services

Streaming media refers to video or audio content that is transmitted in compressed digital form over the Internet and played immediately, rather than being downloaded onto a computer hard drive or other storage media for later playback. Users do not need to wait for the entire media file to arrive; the file is delivered in a continuous stream and playback can begin as soon as enough data has been received.
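
The core idea can be sketched in a few lines of Python (an illustration only; the URL is a placeholder and the “player” is a stub rather than a real decoder): the client reads the response in small chunks and hands each one to the player as it arrives, so playback starts long before the transfer finishes.

    import urllib.request

    MEDIA_URL = "https://example.com/show/episode1.mp4"  # placeholder URL

    def play(chunk: bytes) -> None:
        # Stand-in for a real decoder/player; just report progress here.
        print(f"received and played {len(chunk)} bytes")

    # Download-then-play would call response.read() once and wait for the
    # whole file; streaming reads the same response a chunk at a time.
    with urllib.request.urlopen(MEDIA_URL) as response:
        while True:
            chunk = response.read(64 * 1024)  # 64 KB at a time
            if not chunk:
                break
            play(chunk)  # playback begins after the first chunk arrives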

Media streaming originated with “elevator music” known as Muzak in the early 1950s. It was a service that transmitted music over electrical lines in retail stores and building lobbies. The first efforts to stream music and video on computers and digital networks ran up against the limitations of CPU performance, network bandwidth and data stream interruptions associated with “buffering.”

Attempts in the 1990s by Microsoft (Windows Media Player), Apple (QuickTime) and RealNetworks (RealPlayer) to develop streaming technologies on desktop computers made important breakthroughs. However, each of these solutions required proprietary file formats and media players that resulted in an unworkable system for users.

By the early 2000s, the adoption of broadband internet and improvements in CPU and data throughput along with efforts to create a single, unified format led to the adoption of Adobe Flash as a de facto standard for streaming media. By 2005, when the social media and video sharing service YouTube was established, Flash became the dominant streaming technology on the Internet. More recently—especially since 2011—HTML5 has advanced as an international standard on computers and mobile devices and it will eventually supplant Flash.

Music industry streaming revenue is growing fast and download revenue is falling

Streaming media has been transforming the music industry alongside TV and movies. While digital downloads still represent the largest percentage of music sales in the US, they are falling. Meanwhile, streaming music services like Pandora, Spotify and Apple Music have already overtaken physical CD sales and represent about one third of the industry’s income. Some analysts expect revenue from music streaming to surpass that of digital downloads in the near future.

Consumers and content

Streaming media has fundamentally shifted the relationship between consumers and entertainment content. During the era of broadcast radio (1920s) and television (1950s), consumers needed a “set” to receive the analog programs of radio stations and TV channels. Audience members had to be in front of their radio or TV—with “rabbit ears” antenna adjusted optimally—on a schedule set by the broadcasters. The cost of programming was paid for by commercial advertising and corporate sponsors.

In the cable and satellite era (1970s), consumers began paying for content with subscription fees and programming was “commercial free.” Along with home recording devices—at first analog magnetic tape systems like VCRs (1970s) and digital recording devices like DVRs (late 1990s)—came an important shift in viewing behavior. Consumers could do what is now called “time shifted viewing,” i.e. they could choose when they wanted to experience the recorded content. 

Vinyl records, magnetic tapes and optical recording formats preceded downloading and streaming

At first, music publishers mass produced and marketed analog audio recordings—records (1950s) and then audio tapes (1970s)—and consumers purchased and owned a library of recordings. These records and tapes could be enjoyed at any time and place as long as there was an audio system with a stereo turntable or cassette player available.

The same was true of mass-produced CD audio (1980s) and DVD video (2000s) optical discs. While these digital formats improved portability and their quality did not deteriorate from repeated play—the way analog tape and vinyl did—they required a new generation of optical devices. Portable CD (1980s) and DVD players (late 1990s) addressed this issue, but consumers still had to maintain a library of purchased titles.

With digital downloading of music and video over the Internet, content could finally be played anywhere and at any time on portable digital players like iPods (2001) and notebook PCs. However, consumers were still required to purchase the titles they wanted to enjoy. Instead of owning bookshelves and cabinets full of CD and DVD jewel cases, downloaded electronic files had to be maintained on MP3 players, computer hard drives and digital media servers.

When Internet-based media streaming arrived alongside mobile and wireless computing, time- and place-independent content viewing finally became a reality. Add to these the subscription model—with (potentially) the entire back catalog of recorded music, TV shows and movies available for a relatively small monthly fee—and consumers began flocking in large numbers to services like Netflix and Spotify.

Streaming media trends to watch in 2016

Media industry analysts have been following the impact of streaming content and technologies; some of their recent insights and trend analyses appear below:

Streaming media device adoption in US households with broadband Internet

  • Streaming devices:
    • Linear TV content still dominates US households. However, there are signs that streaming media devices such as Roku, Apple TV, Chromecast and Amazon Fire are rapidly shifting things. The adoption of these devices went from about 17% in 2014 to about 28% of US households with broadband internet in 2015 [Parks Associates]
On-demand music streaming includes music videos

  • Streaming vs. downloading:
    • Online music streams nearly doubled, from 164.5 billion to 317 billion songs
    • Digital song sales dropped 12.5% from 1.1 billion to 964.8 million downloads
    • Digital album sales dropped 2.9% from 106.5 million to 103.3 million downloads [Nielsen 2015 Music Report]
Cable TV subscriptions have been declining with the rise of “cord cutting” and streaming media

  • Cable TV:
    • The cord-cutting trend—households that are ending their cable TV service—is accelerating. The share of households with cable subscriptions fell from 83% in 2014 to under 80% in 2015 [Pacific Crest].
    • Scheduled “linear” TV fell and recorded “linear” TV was flat (or even increased slightly) from 2014 to 2015, while streamed on-demand video increased [Ericsson ConsumerLab].

While streaming audio and video are growing rapidly, traditional radio and TV still represent by far the largest percentages of consumer activity. Obviously, some of the cultural and behavioral changes involved in streaming media run up against audience demographics: some older consumers are less likely to shift their habits while some younger consumers have had fewer or no “linear” experiences.

As the Ericsson ConsumerLab study shows, teenagers spend less than 20% of their TV viewing time watching a TV screen; the other 80% is spent in front of desktop and laptop computers, tablets and smartphones. Despite these differences, streaming content use is soaring and the era of “linear” media is rapidly coming to an end. Just like the relationship between e-books and print books, the electronic alternative is expanding rapidly while the analog form persists and, in some ways, is stronger than ever. Nonetheless, the new era of time- and place-independent on-demand media is fast approaching.

Where is VR going and why you should follow it

Posted in Digital Media, Mobile Media, Social Media, Video on November 15, 2015 by multimediaman

Promotional image for Oculus Rift VR headset

On November 2, video game maker Activision Blizzard Entertainment announced a $5.9 billion purchase of King Digital Entertainment, maker of the mobile app game Candy Crush Saga. Activision Blizzard owns popular titles like Call of Duty, World of Warcraft and Guitar Hero—with tens of millions sold—for play on game consoles and PCs. By comparison, King has more than 500 million worldwide users playing Candy Crush on TVs, computers and (mostly) mobile devices.

While it is not the largest-ever acquisition of a game company—Activision bought Blizzard in 2008 for $19 billion—the purchase shows how much the traditional gaming industry believes that future success will be tied to mobile and social media. Other recent acquisitions indicate how the latest in gaming hardware and software have become strategically important for the largest tech companies:

Major acquisitions of gaming companies by Microsoft, Amazon and Facebook took place in 2014

  • September 2014: Microsoft acquired Mojang for $2.5 billion
    Mojang’s Minecraft game has 10 million users worldwide and an active developer community. The Lego-like Minecraft is popular on both Microsoft’s Xbox game console and Windows desktop and notebook PCs. In making the purchase, Microsoft CEO Satya Nadella said, “Gaming is a top activity spanning devices, from PCs and consoles to tablets and mobile, with billions of hours spent each year.”
  • August 2014: Amazon acquired Twitch for $970 million
    The massive online retailer has offered online video since 2006 and the purchase of Twitch—the online and live streaming game service—adds 45 million users to Amazon’s millions of Prime Video subscribers and FireTV (stick and set top box) owners. Amazon’s CEO Jeff Bezos said of the acquisition, “Broadcasting and watching gameplay is a global phenomenon and Twitch has built a platform that brings together tens of millions of people who watch billions of minutes of games each month.”
  • March 2014: Facebook acquired Oculus for $2 billion
    Facebook users take up approximately 20% of all the time that people spend online each day. The Facebook acquisition of Oculus—maker of virtual reality headsets—anticipates that social media will soon include an immersive experience as opposed to scrolling through rectangular displays on PCs and mobile devices. According to Facebook CEO Mark Zuckerberg, “Mobile is the platform of today, and now we’re also getting ready for the platforms of tomorrow. Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate.”

The integration of gaming companies into the world’s largest software, e-commerce and social media corporations is further proof that media and technology convergence is a powerful force drawing many different industries together. As is clear from the three CEO quotes above, a race is on to see which company can offer a mix of products and services sufficient to dominate the number of hours per day the public spends consuming information, news and entertainment on their devices.

What is VR?

Among the most important current trends is the rapid growth and widespread adoption of virtual reality (VR). Formerly of interest to hobbyists and gaming enthusiasts, VR technologies are now moving into mainstream daily use.

A short definition of VR is a computer-simulated artificial world. More broadly, VR is an immersive multisensory, multimedia experience that duplicates the real world and enables users to interact with the virtual environment and with each other. In the most comprehensive VR environments, the sight, sound, touch and smell of the real world are replicated.

Current and most commonly used VR technologies include a stereoscopic headset—which tracks the movement of a viewer’s head in 3 dimensions—and surround sound headphones that add a spatial audio experience. Other technologies such as wired gloves and omnidirectional treadmills can provide tactile and force feedback that enhance the recreation of the virtual environment.

The New York Times’ VR promotion included a Google Cardboard viewer that was sent along with the printed newspaper to 1 million subscribers

Recent events have demonstrated that VR use is becoming more practical and accessible to the general public:

  • On October 13, in a partnership between CNN and NextVR, the presidential debate was broadcast in VR as a live stream and stored for later on-demand viewing. The CNN experience made it possible for every viewer to watch the event as though they were present, including the ability to see other people in attendance and observe elements of the debate that were not visible to the TV audience. NextVR and the NBA also employed the same technology to broadcast the October 27 season opener between the Golden State Warriors and New Orleans Pelicans, the first-ever live VR sporting event.
  • On November 5, The New York Times launched a VR news initiative that included the free distribution of Google Cardboard viewers—a folded up cardboard VR headset that holds a smartphone—to 1 million newspaper subscribers. The Times’ innovation required users to download the NYTvr app to their smartphone in order to watch a series of short news films in VR.

Origins of VR

Virtual reality is the product of the convergence of theater, camera, television, science fiction and digital media technologies. The basic ideas of virtual reality go back more than two hundred years and coincide with the desire of artists, performers and educators to recreate scenes and historical events. In the early days this meant painting panoramic views, constructing dioramas and staging theatrical productions where viewers had a 360˚ visual surround experience.

In the late 19th century, hundreds of cycloramas were built—many of them depicting major battles of the Civil War—where viewers sat in the center of a circular theater as the historical event was recreated in sequence around them. In 1899, a Broadway dramatization of the novel Ben Hur employed live horses galloping straight toward the audience on treadmills as a backdrop revolved in the opposite direction, creating the illusion of high speed. Dust clouds were employed to provide additional sensory elements.

Frederic Eugene Ives’ Kromskop viewer

Contemporary ideas about virtual reality are associated with 3-D photography and motion pictures of the early twentieth century. Experimentation with color stereoscopic photography began in the late 1800s, and the first widely distributed 3-D images, taken by Frederic Eugene Ives, were of the 1906 San Francisco earthquake. As with present-day VR, Ives’ images required both a special camera and a viewing device, called the Kromskop, in order to see the 3-D effect.

1950s-era 3-D View-Master with reels

3-D photography was expanded and won popular acceptance beginning in the late 1930s with the launch of the View-Master of Edwin Eugene Mayer. The virtual experience of the View-Master system was enhanced with the addition of sound in 1970. Mayer’s company was eventually purchased by toy maker Mattel and later by Fisher-Price, and the product remained successful until the era of digital photography in the early 2000s.

An illustration of the Teleview system that mounted a viewer containing a rotation mechanism in the armrest of theater seats

Experiments with stereoscopic motion pictures were conducted in the late 1800s. The first practical application of a 3-D movie took place in 1922 using the Teleview system of Laurens Hammond (inventor of the Hammond Organ) with a rotating shutter viewing device attached to the armrest of the theater seats.

Prefiguring the present-day inexpensive VR headset, the so-called “golden era” of 3-D film began in the 1950s and included cardboard 3-D glasses. Moviegoers got their first introduction to 3-D with stereophonic sound in 1953 with the film House of Wax starring Vincent Price. The popular enthusiasm for 3-D was eventually overtaken by the practical difficulties associated with the need to project two separate film reels in perfect synchronization.

1950s 3-D glasses and a movie audience wearing them

Subsequent waves of 3-D movies in the second half of the twentieth century—projected from a single film strip—were eventually displaced by the digital film and audio methods associated with the larger formats and Dolby Digital sound of Imax, Imax Dome, Omnimax and Imax 3D. Anyone who has experienced the latest in 3-D animated movies such as Avatar (2009) can attest to the mesmerizing impact of the immersive experience made possible by the latest in these movie theater techniques.

Computers and VR

Recent photo of Ivan Sutherland; he invented the first head-mounted display at MIT in 1966

It is widely acknowledged that the theoretical possibility of creating virtual experiences that “convince” all the senses of their “reality” began with the work of Ivan Sutherland at MIT in the 1960s. In 1966, Sutherland invented the first head-mounted display—nicknamed the “Sword of Damocles”—designed to immerse the viewer in a simulated 3-D environment. In a 1965 essay called “The Ultimate Display,” Sutherland wrote about how computers have the ability to construct a “mathematical wonderland” that “should serve as many senses as possible.”

With increases in the performance and memory capacity of computers, along with the shrinking size of microprocessors and display technologies, Sutherland’s vision began to take hold in the 1980s and 1990s. Advances in vector-based CGI software, especially flight simulators created by government researchers for military aircraft and space exploration, brought the term “reality engine” into use. These systems, in turn, spawned notions of complete immersion in “cyberspace,” where sight, sound and touch are dominated by computer-generated sensations.

The term “virtual reality” was popularized during these years by Jaron Lanier and his company VPL Research. With VR products such as the DataGlove, the EyePhone and the AudioSphere, Lanier partnered with game makers at Mattel to bring the first virtual experiences to affordable consumer products, despite their still limited functionality.

By the end of the first decade of the new millennium, many of the core technologies of present-day VR systems were developed enough to make simulated experiences more convincing and easy to use. Computer animation technologies employed by Hollywood and video game companies pushed the creation of 3-D virtual worlds to new levels of “realness.”

An offshoot of VR called augmented reality (AR) took advantage of high-resolution camera technologies to make virtual objects appear within the actual environment, letting users view and interact with them on desktop and mobile displays. AR solutions became popular with advertisers, offering unique promotional opportunities that capitalized on the ubiquity of smartphones and tablets.

Expectations

A scene from the 2009 film Avatar

Aside from news, entertainment and advertising, there are big possibilities opening up for VR in many business disciplines. Some experts expect that VR will impact almost every industry in a manner similar to that of PCs and mobile devices. Entrepreneurs and investors are creating VR companies with the aim of exploiting the promise of the new technology in education, health care, real estate, transportation, tourism, engineering, architecture and corporate communications (to name just a few).

Like consumer-level artificial intelligence (Apple’s Siri and Amazon’s Echo, for example), present-day virtual reality technologies tend to fall frustratingly short of expectations. However, with the rapid evolution of core technologies—processors, software, video displays, sound, miniaturization and haptic feedback systems—it is conceivable that VR is ripe for a significant leap in the near future.

In many ways, VR is the ultimate product of media convergence as it is the intersection of multiple and seemingly unrelated paths of scientific development. As pointed out by Howard Rheingold in his authoritative 1991 book Virtual Reality, “The convergent nature of VR technology is one reason why it has the potential to develop very quickly from a scientific oddity into a new way of life … there is a significant chance that the deep cultural changes suggested here could happen faster than anyone has predicted.”

Hermann Zapf (1918–2015): Digital typography

Posted in Digital Media, People in Media History, Phototypesetting, Typography with tags , , , , , , , , , , , , , on September 30, 2015 by multimediaman
Hermann Zapf: November 8, 1918 – June 4, 2015

On Friday, June 12, Apple released its San Francisco system font for OS X, iOS and watchOS. Largely overlooked amid the media coverage of other Apple product announcements, the introduction of San Francisco was a noteworthy technical event.

San Francisco is a neo-grotesque, sans serif, pan-European typeface with characters in Latin as well as Cyrillic and Greek scripts. It is significant because it is the first font designed specifically for all of Apple’s display technologies. Important variations have been introduced into San Francisco to optimize its readability on Apple desktop, notebook, TV, mobile and watch devices.

It is also the first font designed by Apple in two decades. San Francisco extends Apple’s association with typographic innovation that began in the mid-1980s with desktop publishing. From a broader historical perspective, Apple’s new font confirms the ideas developed more than fifty years ago by the renowned calligrapher and type designer Hermann Zapf. Sadly, Zapf died at the age of 96 on June 4, 2015, just one week before Apple’s San Francisco announcement.

Hermann Zapf’s contributions to typography are extensive and astonishing. He designed more than 200 typefaces—the popular Palatino (1948), Optima (1952), Zapf Dingbats (1978) and Zapf Chancery (1979) among them—including fonts in Arabic, Pan-Nigerian, Sequoia and Cherokee. Meanwhile, Zapf’s exceptional calligraphic skills were such that he famously penned the Preamble of the Charter of the United Nations in four languages for the New York Pierpont Morgan Library in 1960.

Zapf’s calligraphic skills were called upon for the republication of the Preamble of the UN Charter in 1960 for the Pierpont Morgan Library in New York City.

While his creative accomplishments are far too many to list here, Hermann Zapf’s greatest legacy is the way he thought about type and its relationship to technology as a whole. Zapf was among the first, and perhaps the most important, of the typographers to theorize about the need for new forms of type driven by computer and digital technologies.

Early life

Hermann Zapf was born in Nuremberg on November 8, 1918, during the turbulent times at the end of World War I. As he wrote later in life, “On the day I was born, a workers’ and soldiers’ council took political control of the city. Munich and Berlin were rocked by revolution. The war ended, and the Republic was declared in Berlin on 9 November 1918. The next day Kaiser Wilhelm fled to Holland.”

At school, Hermann took an interest in technical subjects. He spent time in the library reading scientific journals and at home, along with his older brother, experimenting with electronics. He also tried hand lettering and created his own alphabets.

Hermann left school in 1933 with the intention of becoming an engineer. However, economic crisis and upheaval in Germany—including the temporary political detention of his father in March 1933 at the prison camp in Dachau—prevented him from pursuing his plans.

Apprentice years

Barred from attending the Ohm Technical Institute in Nuremberg for political reasons, Hermann sought an apprenticeship in lithography. He was hired in February 1934 to a four-year apprenticeship as a photo retoucher by Karl Ulrich and Company.

In 1935, after reading books by Rudolf Koch and Edward Johnston on lettering and illuminating techniques, Hermann taught himself calligraphy. When management saw the quality of Hermann’s lettering, the Ulrich firm began to assign him work outside of his retouching apprenticeship.

At his father’s insistence, Hermann refused to take the final apprenticeship examination on the grounds that his training had been interrupted by so many unrelated tasks. He never received his journeyman’s certificate and left Nuremberg for Frankfurt to find work.

Zapf’s Gilgengart designed originally in 1938

Zapf started his career in type design at the age of 20 after he was employed at the Fürsteneck Workshop House, a printing establishment run by Paul Koch, the son of Rudolf Koch. As he later explained, “It was through the print historian Gustav Mori that I first came into contact with the D. Stempel AG type foundry and Linotype GmbH in Frankfurt. It was for them that I designed my first printed type in 1938, a fraktur type called ‘Gilgengart’.”

War years

Hermann Zapf was conscripted in 1939 and called up to serve in the German army near the town of Pirmasens on the French border. After a few weeks, he developed heart trouble and was transferred from the hard labor of shovel work to the writing room where he composed camp reports and certificates.

When World War II started, Hermann was dismissed for health reasons. In April 1942 he was called up again, this time for the artillery. Hermann was quickly reassigned to the cartographic unit where he became well-known for his exceptional map drawing skills. He was the youngest cartographer in the German army through the end of the war.

An example of calligraphy from the sketchbook that Hermann Zapf kept during World War II.

Zapf was captured after the war by the French and held in a field hospital in Tübingen. As he recounted, “I was treated very well and they even let me keep my drawing instruments. They had a great deal of respect for me as an ‘artiste’ … Since I was in very poor health, the French sent me home just four weeks after the end of the war. I first went back to my parents in my home town of Nuremberg, which had suffered terrible damage.”

Post-war years

In the years following the war, Hermann gave lessons in calligraphy in Nuremberg. In 1947, he returned to Frankfurt and took a position with the Stempel AG type foundry with little qualification other than the sketchbooks he had kept during the war years.

From 1948 to 1950, while he worked at Stempel on typography designs for metal punch cutting, he developed a specialization in book design. Hermann also continued to teach calligraphy twice a week at the Arts and Crafts School in Offenbach.

Zapf’s Palatino (1948) and Optima (1952) fonts

It was during these years that Zapf designed Palatino and Optima. Working closely with the punch cutter August Rosenberg, Hermann designed Palatino, naming it after the 16th-century Italian master of calligraphy Giambattista Palatino. In the Palatino face, Zapf attempted to emulate the forms of the great humanist typographers of the Renaissance.

Optima, on the other hand, expressed more directly the genius of Zapf’s vision and foreshadowed his later contributions. Optima can be described as a hybrid serif-and-sans serif typeface because it blends features of both: serif-less thick and thin strokes with subtle swelling at the terminals that suggest serifs. Zapf designed Optima during a visit to Italy in 1950 when he examined inscriptions at the Basilica di Santa Croce in Florence. It is remarkably modern, yet clearly derived from the Roman monumental capital model.

By the time Optima was released commercially by Stempel AG in 1958, the industry had begun to move away from metal casting methods and into phototypesetting. As many of his most successful fonts were reworked for the new methods, Zapf recognized—perhaps before and more profoundly than most—that phototypesetting was a transitional technology on the path from analog to an entirely new digital typography.

Digital typography

To grasp the significance of Zapf’s work, it is important to understand that, although “cold” photo type was an advance over “hot” metal type, both are analog technologies, i.e. they require the transfer of “master” shapes from manually engraved punches or hand drawn outlines to final production type by way of molds or photomechanical processes.

Due to the inherent limitations of metal and photomechanical media, analog type masters often contain design compromises. Additionally, the reproduction from one master generation to the next has variations and inconsistencies connected with the craftsmanship of punch cutting or outline drawing.

With digital type, the character shapes exist as electronic files that “describe” fonts in mathematical vector outlines or in raster images plotted on an XY coordinate grid. With computer font data, typefaces have many nuances and features that could never be rendered in metal or photo type. Meanwhile, digital font masters can be copied precisely without any quality degradation from one generation to the next.
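
To make the contrast concrete, here is a minimal sketch, in Python and with entirely invented glyph data, of the two representations described above: a stroke stored as a resolution-independent mathematical outline (a quadratic Bézier curve) and the same stroke rasterized onto a coarse XY grid, as a screen or printer must ultimately render it.

    # Illustrative sketch only: a made-up "glyph stroke" stored as a
    # mathematical outline (quadratic Bezier control points), then
    # rasterized onto a coarse XY pixel grid.

    def bezier_point(p0, p1, p2, t):
        """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        return x, y

    # The "master" outline: three control points, resolution independent.
    stroke = ((2, 2), (8, 18), (17, 3))

    # Rasterize the outline onto a 20 x 20 grid of "pixels".
    WIDTH, HEIGHT = 20, 20
    grid = [[" "] * WIDTH for _ in range(HEIGHT)]
    for i in range(200):
        t = i / 199
        x, y = bezier_point(*stroke, t)
        grid[int(round(y))][int(round(x))] = "#"

    # Print with y increasing upward, as on a type designer's grid.
    for row in reversed(grid):
        print("".join(row))

The same outline can be scaled to any size or resolution before it is rasterized, which is why a digital master can be copied and reused without the generational degradation described above.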

Hermann Zapf in 1960

From the earliest days of computers, Hermann Zapf began advocating for the advancement of digital typography. He argued that type designers needed to take advantage of the possibilities opened up by the new technologies and needed to create types that reflected the age. Zapf also combined knowledge of the rules of good type design with a recognition that fonts needed to be created specifically for electronic displays (at that time CRT-based monitors and televisions).

In 1959, at the age of 41, Zapf wrote in an industry journal, “It is necessary to combine the purpose, the simplicity and the beauty of the types, created as an expression of contemporary industrial society, into one harmonious whole. We should not seek this expression in imitations of the Middle Ages or in revivals of nineteenth century material, as sometimes seems the trend; the question for us is satisfying tomorrow’s requirements and creating types that are a real expression of our time but also represent a logical continuation of the typographic tradition of the western world.”

Warm reception in the US

 Despite a very cold response in Germany—his ideas about computerized type were rejected as “unrealistic” by the Technical University in Darmstadt where he was a lecturer and by leading printing industry representatives—Hermann persevered. Beginning in the early 1960s, Zapf delivered a series of lectures in the US that were met with enthusiasm.

For example, a talk he delivered at Harvard University in October 1964 proved so popular that it led to the offer of a professorship at the University of Texas at Austin. The governor even made Hermann an “Honorary Citizen of the State of Texas.” In the end, Zapf turned down the opportunity because of family obligations in Germany.

Among his many digital accomplishments are the following:

  • Rudolf Hell

    When digital typography was born in 1964 with Rudolf Hell’s Digiset system, Hermann Zapf was involved. By the early 1970s, Zapf had created some of the first fonts designed specifically for a digital system: Marconi, Edison and Aurelia.

  • In 1976, Hermann was asked to take up a professorship in typographic computer programming at Rochester Institute of Technology (RIT) in Rochester, New York, the first of its kind in the world. Zapf taught at RIT for ten years and was able to develop his ideas in collaboration with computer scientists and representatives of IBM and Xerox.
  • With Aaron Burns

    In 1977, Zapf partnered with the graphic designers Herb Lubalin and Aaron Burns to found Design Processing International, Inc. (DPI) in New York City. The firm developed software with menu-driven typesetting features that could be used by non-professionals. The DPI software focused on automating hyphenation and justification rather than on the style of type design (a toy sketch of automated justification follows this list).

  • In 1979, Hermann began a collaboration with Professor Donald Knuth of Stanford University to develop a typeface suited to mathematical formulae and symbols, work that led to the AMS Euler typeface.
  • With Peter Karow

    In the 1990s, Hermann Zapf continued to focus on the development of professional typesetting algorithms with his “hz-program” in collaboration with Peter Karow of the font company URW. Eventually the Zapf composition engine was incorporated by Adobe Systems into the InDesign desktop publishing software.
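
The composition problems mentioned above can be hard to picture in the abstract. Below is a toy Python sketch of greedy line breaking and justification that pads the interword spaces so each line ends flush at the margin. It is an illustration of the general idea only, not DPI’s software or the Karow–Zapf hz-program, which optimize whole paragraphs and adjust the glyphs themselves.

    # Toy greedy line-breaking and justification, for illustration only;
    # real composition engines (Knuth-Plass, the hz-program) optimize
    # over whole paragraphs and adjust glyph widths, not just spaces.

    def justify(text, width=40):
        words = text.split()
        lines, line = [], []
        for word in words:
            # Would adding this word exceed the measure? Start a new line.
            if line and len(" ".join(line + [word])) > width:
                lines.append(line)
                line = []
            line.append(word)
        lines.append(line)

        out = []
        for i, line in enumerate(lines):
            if i == len(lines) - 1 or len(line) == 1:
                out.append(" ".join(line))  # last line stays ragged
                continue
            gaps = len(line) - 1
            spaces = width - sum(len(w) for w in line)
            per_gap, extra = divmod(spaces, gaps)
            row = ""
            for j, w in enumerate(line[:-1]):
                row += w + " " * (per_gap + (1 if j < extra else 0))
            out.append(row + line[-1])
        return "\n".join(out)

    print(justify("Digital font masters can be copied "
                  "precisely without any quality degradation "
                  "from one generation to the next."))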

Zapf’s legacy

Hermann Zapf actively participated—into his 70s and 80s—in some of the most important developments in type technology of the past fifty years. This was no accident. He possessed both a deep knowledge of the techniques and forms of type history and a unique appreciation for the impact of information technologies on the creation and consumption of the written word.

In 1971, Zapf gave a lecture in Stockholm called “The Electronic Screen and the Book” where he said, “The problem of legibility is as old as the alphabet, for the identification of a letterform is the basis of its practical use. … To produce a clear, readable text that is pleasing to the eye and well arranged has been the primary goal of typography in all the past centuries. With a text made visible on a CRT screen, new factors for legibility are created.”

More than 40 years before the Apple design team set out to create a font that is legible on multiple computer screens, the typography visionary Hermann Zapf was theorizing about the very same questions.

AI and the future of information

Posted in Digital Media, Mobile, Social Media with tags , , , , , , , , , on June 30, 2015 by multimediaman
Amazon Echo intelligent home assistant

Last November, Amazon revealed its intelligent home assistant, Echo. The black, cylinder-shaped device is always on and ready for voice commands. It can play music and read audiobooks, and it is connected to Alexa, Amazon’s cloud-based information service. Alexa can answer any number of questions about the weather, news, sports scores, traffic and your schedule in a human-like voice.

Echo has an array of seven microphones and it can hear—and also learn—your voice, speech pattern and vocabulary even from across the room. With additional plugins, Echo can control your automated home devices like lights, thermostat, kitchen appliances, security system and more with just the sound of your voice. This is certainly a major leap from “Clap on, Clap off” (watch “The Clapper” video from the mid-1980s here: https://www.youtube.com/watch?v=Ny8-G8EoWOw).
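
To make the voice-control idea concrete, here is a deliberately simplified, hypothetical Python sketch of how an already-transcribed command might be routed to a home-automation action. The device names and handler functions are invented for the example and do not correspond to Amazon’s actual Alexa interfaces.

    # Hypothetical sketch: routing an already-transcribed voice command
    # to a smart-home action. None of these names are real Amazon APIs.

    def set_lights(state):
        print(f"Lights turned {state}")

    def set_thermostat(degrees):
        print(f"Thermostat set to {degrees} degrees")

    def handle_command(transcript):
        words = transcript.lower().split()
        if "lights" in words:
            state = "off" if "off" in words else "on"
            set_lights(state)
        elif "thermostat" in words:
            # Take the first number in the utterance as the target temperature.
            numbers = [w for w in words if w.isdigit()]
            if numbers:
                set_thermostat(int(numbers[0]))
            else:
                print("Sorry, I didn't catch a temperature.")
        else:
            print("Sorry, I can't help with that yet.")

    handle_command("Alexa, turn off the lights")
    handle_command("Alexa, set the thermostat to 68")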

As many critics have pointed out, the Echo is Amazon’s response to Siri, Apple’s voice-activated intelligent personal assistant and knowledge navigator. Siri was launched as an integrated feature of the iPhone 4S in October 2011 and of the iPad released in May 2012. Siri is also now part of the Apple Watch, a wearable device that adds haptics (tactile feedback) and voice recognition, along with a digital crown control knob, to the human-computer interface (HCI).

If you have tried to use any of these technologies, you know that they are far from perfect. As the New York Times reviewer Farhad Manjoo explained, “If Alexa were a human assistant, you’d fire her, if not have her committed.” Oftentimes, using any of the modern artificial intelligence (AI) systems can be an exercise in futility. However, it is important to recognize how far computer interaction has come since mainframe consoles and command-line interfaces were replaced by the graphical, point-and-click interaction of the desktop.

What is artificial intelligence?

The pioneers of artificial intelligence theory: Alan Turing, John McCarthy, Marvin Minsky and Ray Kurzweil

Artificial intelligence is the simulation of the functions of the human brain—such as visual perception, speech recognition, decision-making and translation between languages—by man-made machines, especially computers. The field was started by the noted computer scientist Alan Turing shortly after World War II, and the term was coined in 1956 by John McCarthy, a cognitive and computer scientist and Stanford University professor. McCarthy developed one of the first programming languages, LISP, in the late 1950s and is recognized as an early proponent of the idea that computer services should be provided as a utility.

McCarthy worked with Marvin Minsky at MIT in the late 1950s and early 1960s and together they founded what has become known as the MIT Computer Science and Artificial Intelligence Laboratory. Minsky, a leading AI theorist and cognitive scientist, put forward a range of ideas and theories to explain how language, memory, learning and consciousness work.

The core of Minsky’s theory—what he called the society of mind—is that human intelligence is a vast complex of very simple processes that can be individually replicated by computers. In his 1986 book The Society of Mind Minsky wrote, “What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.”

The theory, science and technology of artificial intelligence have advanced rapidly with the development of microprocessors and the personal computer. These advances have also been aided by a growing understanding of the functions of the human brain. In recent decades, the field of neuroscience has vastly expanded our knowledge of the parts of the brain, especially the neocortex and its role in the transition from sensory perception to thought and reasoning.

Ray Kurzweil has been a leading theoretician of AI since the 1980s and has pioneered the development of devices for text-to-speech, speech recognition, optical character recognition and music synthesis (the Kurzweil K250). He sees the development of AI as a necessary outcome of computer technology and has argued widely—in The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), The Singularity Is Near (2005) and How to Create a Mind (2012)—that it is a natural extension of the biological capacities of the human mind.

Kurzweil, who corresponded with Marvin Minsky while still a New York City high school student, has postulated that artificial intelligence can solve many of society’s problems. Kurzweil believes—based on the exponential growth of computing power, processor speed and memory capacity—that humanity is rapidly approaching a “singularity” in which machine intelligence will be infinitely more powerful than all human intelligence combined. He predicts that this transformation will occur in 2029, a moment when developments in computer technology, genetics, nanotechnology, robotics and artificial intelligence will transform the minds and bodies of humans in ways that cannot currently be comprehended.
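
Kurzweil’s timetable rests on compounding: if capacity doubles on a fixed schedule, the gains multiply rather than add. A quick back-of-the-envelope calculation in Python, assuming purely for illustration that computing capacity doubles every two years from a 2015 baseline, shows how quickly the numbers run away:

    # Back-of-the-envelope compounding, for illustration only: assume
    # computing capacity doubles every 2 years starting from 2015.

    DOUBLING_PERIOD_YEARS = 2
    baseline_year = 2015

    for target_year in (2019, 2025, 2029):
        doublings = (target_year - baseline_year) / DOUBLING_PERIOD_YEARS
        growth = 2 ** doublings
        print(f"{target_year}: about {growth:,.0f}x the {baseline_year} capacity")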

Some fear that the ideas of Kurzweil and his fellow adherents of transhumanism represent an existential threat to society and mankind. These critics—among them the physicist Stephen Hawking and the electric-car and private-spaceflight pioneer Elon Musk—argue that artificial intelligence could deliver the biggest “blow back” in history, of the kind depicted in Kubrick’s film 2001: A Space Odyssey.

While much of this discussion remains speculative, anyone who watched the IBM supercomputer Watson defeat two very successful Jeopardy! champions (Ken Jennings and Brad Rutter) in 2011 knows that AI has already come a long way. Unlike the human contestants, Watson had 200 million pages of structured and unstructured content, including the full text of Wikipedia, stored in four terabytes of disk storage.
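
Taking those figures at face value, a one-line calculation shows the per-page footprint is modest, on the order of 20 kilobytes, which is consistent with stored text rather than scanned page images:

    # Rough arithmetic on the Watson figures quoted above.
    pages = 200_000_000
    storage_bytes = 4 * 10 ** 12            # four terabytes (decimal)
    per_page_kb = storage_bytes / pages / 1000
    print(f"About {per_page_kb:.0f} KB of stored content per page")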

Media and interface obsolescence

Today, the advantages of artificial intelligence are available to great numbers of people in the form of personal assistants like Echo and Siri. Even with their limitations, these tools allow instant access to information almost anywhere and anytime with a series of simple voice commands. When combined with mobile, wearable and cloud computing, AI is making all previous forms of information access and retrieval—analog and digital alike—obsolete.

There was a time not that long ago when gathering important information required a trip—with pen and paper in hand—to the library or to the family encyclopedia in the den, living room or study. Can you think of the last time you picked up a printed dictionary? The last complete edition of the Oxford English Dictionary—all 20 volumes—was printed in 1989. Anyone born after 1993 is likely to have never seen an encyclopedia (the last edition of the Encyclopedia Britannica was printed in 2010). Further still, GPS technologies have driven most printed maps into bottom drawers and the library archives.

Instant messaging vs email communications

Among teenagers, instant messaging has overtaken email as the primary form of electronic communications

But that is not all.  The technology convergence embodied in artificial intelligence is making even more recent information and communication media forms relics of the past. Optical discs have all but disappeared from computers and the TV viewing experience as cloud storage and time-shifted streaming video have become dominant. Social media (especially photo apps) and instant messaging have also made email a legacy form of communication for an entire generation of young people.

Meanwhile, the advance of the touch/gesture interface is rapidly replacing the mouse, and, with improvements in speech-to-text technology, it is not hard to imagine the disappearance of the QWERTY keyboard (a relic of the mechanical limitations of the 19th-century typewriter). Even the desktop computer display is due for replacement by cameras and projectors that can make any surface an interactive workspace.

In his epilogue to How to Create a Mind, Ray Kurzweil writes, “I already consider the devices I use and the cloud computing resources to which they are virtually connected as extensions of myself, and feel less than complete if I am cut off from these brain extenders.” While some degree of skepticism is justified toward Kurzweil’s transhumanist theories as a form of technological utopianism, there is no question that artificial intelligence is a reality and that it will be with us—increasingly integrated into us and as an extension of us—for now and evermore.