Charles Stanhope (1753–1816): Iron printing press

Posted in People in Media History, Print Media on February 25, 2016 by multimediaman
Charles Stanhope, 3rd Earl Stanhope: August 3, 1753–December 15, 1816

Historians generally agree that the first industrial revolution took place between 1760 and 1840. Among the features of this great economic and social transformation were: (1) the progression from a predominantly rural to an urban society, (2) the replacement of handicraft with machine production, (3) the introduction of iron and steel in place of wood and (4) the replacement of muscle power with new energy sources such as coal-fired steam.

A unique set of circumstances—a stable commercial environment, advances in iron making and an abundance of skilled mechanics—made Britain the birthplace of the industrial revolution. Beginning with new techniques in textile production, industrial innovations spread rapidly to other manufacturing sectors and then across national borders in Europe and around the globe. All aspects of life would be touched by industrialization: population, politics, trade and commerce, science and culture, education, transportation and communication.

It was during this era of remarkable change that the English aristocrat Charles Stanhope invented—sometime around 1800—the first printing press constructed wholly of iron. Prior to Stanhope’s achievement, the basic design and construction of printing presses had changed little in the three and a half centuries since Gutenberg.

Previously, small adjustments had been made to the wooden press. These related to structural stability, increased sheet size and automation to reduce human muscle power. But, even with the inclusion of some iron parts, the basic design of printing presses remained as it was in 1450.

With the Stanhope hand press, both the design of the impression mechanism and the material from which the machine was built were transformed; Stanhope’s contribution was a crucial preliminary step in the industrial development of print communications.

Young Lord Stanhope

Charles Stanhope, third Earl Stanhope, was born on August 3, 1753, the younger of two sons of Philip Stanhope, second Earl Stanhope, and his wife Lady Grisel (Hamilton) Stanhope. As a member of the English peerage—with titles like Duke, Earl and Baron—Charles is often referred to as Lord Stanhope or Earl Stanhope. Born into the English aristocracy, he was afforded a privileged upbringing and, at the age of nine, was enrolled by his parents at the prestigious Eton boarding school.

Portrait of the young Lord Stanhope

In 1763, following the death of his older brother Philip from tuberculosis at age seventeen, Charles became the family heir. His parents decided that Charles’ “health should not be exposed to the English climate, or the care of his mind to the capricious attention of the English schoolmaster” and the family relocated to Geneva, Switzerland. At age eleven, he was enrolled at the school in Geneva founded on the principles of John Calvin, where he studied philosophy, science and math.

As a teenager, Charles was known as a devoted cricket player, an exceptional equestrian and a well-mannered young man who was admired by his peers. At age seventeen, Charles won a prize in a Swedish competition for the best essay, written in French, on the construction of a pendulum.

While Charles was accomplished academically in math and science, he was also known to have talents in drawing and painting. As a nobleman, Charles had obligations as a militia commander and he developed a passion for archery and musket shooting. At eighteen, he won a competition and was crowned the best shot and so-called “King of the Arquebusiers.”

By the time Charles completed his education in Switzerland, his parents decided to move the family back to England. According to a published account, as the family and its entourage left Geneva in 1774, “The young gentleman was obliged to come out again and again to his old friends and companions who pressed round the coach to bid him farewell, and expressed their sorrow for his departure and their wishes for his prosperity.”

Stanhope the inventor

During their five-month journey home to England from Switzerland, the family made a stop in Paris. Charles was welcomed and “esteemed by most of the learned educated men of the capital” over the prize he had won for his paper on pendulum design. He was developing an international reputation as an innovator.

Upon his return to England, Charles used his skills in mechanics to win election to London’s Royal Society, the world-renowned scientific society chartered by King Charles II in the 17th century to promote the benefits and accomplishments of science. At the age of 20, Charles embarked on a series of self-funded experiments and inventions, and his interest in such matters continued throughout his life.

The first of two calculating machines invented by Charles Stanhope. His “arithmetical machines” have been recognized as precursors to the computer.

The most important of these were:

  • A method for preventing counterfeiting of gold currency (1775)
  • A system for fireproofing houses by starving a fire of air (1778)
  • Several mechanical “arithmetical machines” that could add, subtract, multiply and divide. These inventions were early forerunners of computers (1777 and 1780).
  • Experiments in steamboat navigation and ship construction, which included the invention of the split pin, later known as the cotter pin (1789).
  • A popular single-lens microscope, which became known as the Stanhope, used in medical practice and for the examination of transparent materials such as crystals and fluids (1806).
  • A monochord, a single-string device used for tuning musical instruments
  • Improvements in canal locks and inland navigation (1806)

Charles Stanhope became so accomplished in international scientific circles that he was befriended by Benjamin Franklin. The two spent time together during Franklin’s visits to England prior to the American Revolution. They shared a mutual interest in electricity and, in 1779, Charles Stanhope published a volume entitled “Principles of Electricity” that corroborated Franklin’s ideas about lightning rods through experimental evidence.

The Stanhope press

By 1800, as has often happened in graphic arts history, the environment became ripe for a major step forward in printing methods. Charles Stanhope—who had the desire, know-how and resources to make it happen—stepped forward with a significant breakthrough.

Due to his many democratic political pursuits and scientific publishing activities—some of which concerned freedom of the press—Charles was very familiar with printing technology. Among his concerns were the cost of production, the accuracy of the content, the beauty of the print quality and the importance of books for the expansion of knowledge in society as a whole.

A drawing of the original Stanhope press design. None of these are known to exist today.

All letterpress technologies require a means to transfer ink from the surface of the metal type forms to the paper. This process requires the application of pressure, i.e. an impression, that mechanically drives the ink into the paper fibers. The pressure also creates a slight indentation in the shape of the letter forms in the surface of the paper.

Prior to 1800, press designs were based on the screw press that had been used for pressing grapes (wine) and olives (oil), cloth and paper going back to Roman times. The screw mechanism is a complex arrangement of the screw, nut, spindle and fixed bar that drives the platen—the flat plate that presses the paper against the type form—downward. There are many historical drawings and engravings that illustrate how physical strength is required to pull the bar and make a printing impression with the Gutenberg era press design.

Stanhope’s innovation, according to historian James Moran, was that “he retained the conventional screw but separated it from the spindle and bar, inserting a system of compound levers between them. The effect of several levers acting upon another is to multiply considerably the power applied.” The compound lever system was so successful that it became referred to as “Stanhope principles” and was incorporated into subsequent generations of hand press design in the nineteenth century (Columbian, Albion and Washington).
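
To make the quoted lever principle concrete, here is a minimal sketch (in Python, with purely illustrative numbers that are not taken from Stanhope’s actual press geometry) of how compound levers multiply the pressman’s pull: each lever multiplies force by the ratio of its effort arm to its load arm, and levers in series multiply those ratios together.

```python
# Illustrative sketch of compound-lever force multiplication.
# The lever arm ratios below are hypothetical, not Stanhope's actual geometry.

def compound_lever_output(input_force_n, lever_ratios):
    """Each lever multiplies force by (effort arm / load arm);
    levers acting in series multiply their ratios together."""
    force = input_force_n
    for effort_arm, load_arm in lever_ratios:
        force *= effort_arm / load_arm
    return force

# A pressman pulling with about 200 N through two levers with 4:1 and 3:1 arm ratios
pull_newtons = 200.0
levers = [(4.0, 1.0), (3.0, 1.0)]
print(f"Force delivered toward the platen: {compound_lever_output(pull_newtons, levers):,.0f} N")
# -> 2,400 N: several levers "acting upon another" multiply the power applied.
```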

The second and more common design of the Stanhope hand press. Note the separation of the lever system from the screw and platen mechanism.

Other important Stanhope press changes were:

  • All-iron construction, including a massive frame formed in one piece
  • A double-size platen
  • A regulator that controlled the intensity of the impression

The Stanhope press would undergo several important modifications, the most important of which was strengthening the frame in 1806 to prevent the iron from cracking under the stress of repeated impressions. The second design—with its characteristic rounded cheeks—is what today is commonly associated with the Stanhope press.

The Times of London immediately adopted the Stanhope press and it became successful across Europe and America in the first few decades of the 1800s. Meanwhile, further developments with all-iron hand presses would continue up to the end of the nineteenth century. However, driven by the rapid advancement of the industrial revolution, the next stage in the evolution of press design—the introduction of cylinders and steam power—would rapidly eclipse Stanhope’s accomplishments.

Stanhope the statesman

Charles, 3rd Earl Stanhope, was an unusual man. In addition to his many inventions and scientific studies, he devoted himself to radical political causes that often ran counter to his aristocratic background. He often referred to himself as “Citizen” Stanhope. The origins of his democratic leanings were to be found in the influence of his father—who was a member of Parliament, an outspoken critic of the crown and a proponent of habeas corpus—his education in the radical environment of Geneva, and the revolutions in America (1776) and France (1789).

Known publicly as Viscount Mahon at the time, Charles was elected to Parliament in 1780 and adopted positions that conflicted with the political elite. His demands for electoral and finance reform and religious tolerance of dissenters and Catholics did not sit well with the establishment. Charles was also known to have campaigned against slavery and was party to the abolition bill known as the Slave Trade Act of 1807.

Stanhope estate at Chevening, Kent. Charles died here on December 15, 1816.

Charles Stanhope was an opponent of the war against the thirteen colonies and a supporter of John Wilkes, a British sympathizer of the American rebels. Despite his efforts on behalf of the oppressed and downtrodden in society, Charles Stanhope’s personal eccentricities caused him, especially later in life, to be isolated from his family.

Always thinking of others before himself, he allowed his mansion at Chevening, Kent, to fall into disrepair, and it is speculated that he starved himself on a diet of soup and barley water. Charles Stanhope was interred “as a very poor man” in the family vault at Chevening Church one week after his death on December 15, 1816.

Streaming and the era of on-demand media

Posted in Audio, Digital Media, Video on January 16, 2016 by multimediaman

On January 6, Netflix went live with its video-streaming service in 130 new countries across the globe. The expansion—covering most of the world except for China—was announced by Netflix cofounder and CEO Reed Hastings during a keynote speech at the International Consumer Electronics Show in Las Vegas. Hastings said, “Today, right now, you are witnessing the birth of a global TV network.”

Reed Hastings, CEO of Netflix, announcing the global expansion of the streaming video service on January 6

Prior to this latest announcement, Netflix had 40 million subscribers in the US and 20 million internationally, operating in a total of 60 countries and available in 17 languages. According to Hastings, the company’s goal is to reach 200 countries by the end of 2016 and to sign up 90 million US and 450 million worldwide subscribers.

The rapid expansion of Netflix is part of the transformation of TV program and movie viewing that has been underway for a decade or more. While “linear TV”— programming that is presented at specific times and on non-portable screens—is still popular, it is being rapidly overtaken by the new personalized, on-demand and mobile subscription services like Netflix.

According to Netflix, the growth of Internet TV is driven by (1) advancements in Internet reliability and performance, (2) the time and place flexibility of on-demand viewing and (3) accelerating innovation in streaming video technology. A possible fourth driver of Netflix’s success is its subscription-based model. Unlike previous on-demand solutions, which often required consumers to purchase their own copies of movies and music one at a time—or rent them for a specified period—streaming media services like Netflix offer subscribers unlimited access to the entire content library for a monthly fee.

Streaming media

Popular video and music streaming services

Streaming media refers to video or audio content that is transmitted in a compressed digital form over the Internet and played immediately, rather than being downloaded onto a computer hard drive or other storage media for later playback. Users do not need to wait for the entire media file to arrive before playing it; the file is delivered in a continuous stream and can be watched or listened to as soon as enough data has been received to begin playback.
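
As a rough illustration of the difference between downloading and streaming, the sketch below (a simplified Python simulation written for this article, not any real player’s code) begins playback once a small buffer has filled rather than waiting for the entire file.

```python
# Simplified simulation of progressive streaming: playback starts once a small
# buffer has filled, instead of waiting for the entire file to download.
from collections import deque

def stream_and_play(chunks, buffer_target=3):
    buffer = deque()
    total = len(chunks)
    for received, chunk in enumerate(chunks, start=1):
        buffer.append(chunk)                      # a chunk arrives over the network
        if len(buffer) >= buffer_target:          # enough buffered to play smoothly
            print(f"playing {buffer.popleft()} (received {received} of {total} chunks)")
    while buffer:                                 # drain whatever remains after delivery ends
        print(f"playing {buffer.popleft()}")

# A conventional download would require all 8 chunks before playback could begin.
stream_and_play([f"chunk-{n}" for n in range(1, 9)])
```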

Media streaming originated with “elevator music” known as Muzak in the early 1950s. It was a service that transmitted music over electrical lines in retail stores and building lobbies. The first efforts to stream music and video on computers and digital networks ran up against the limitations of CPU performance, network bandwidth and data stream interruptions associated with “buffering.”

Attempts in the 1990s by Microsoft (Windows Media Player), Apple (QuickTime) and RealNetworks (RealPlayer) to develop streaming technologies on desktop computers made important breakthroughs. However, each of these solutions relied on proprietary file formats and media players, resulting in a fragmented and often unworkable experience for users.

By the early 2000s, the spread of broadband internet and improvements in CPU performance and data throughput, along with efforts to create a single, unified format, led to the adoption of Adobe Flash as a de facto standard for streaming media. By 2005, when the social media and video sharing service YouTube was established, Flash had become the dominant streaming technology on the Internet. More recently—especially since 2011—HTML5 has advanced as an international standard on computers and mobile devices and will eventually supplant Flash.

Music industry streaming revenue is growing fast and download revenue is falling

Streaming media has been transforming the music industry alongside TV and movies. While digital downloads still represent the largest percentage of music sales in the US, they are falling. Meanwhile, streaming music services like Pandora, Spotify and Apple Music have already overtaken physical CD sales and represent about one third of the industry’s income. Some analysts expect revenue from music streaming to surpass that of digital downloads in the near future.

Consumers and content

Streaming media has fundamentally shifted the relationship between consumers and entertainment content. During the era of broadcast radio (1920s) and television (1950s), consumers needed a “set” to receive the analog programs of radio stations and TV channels. Audience members had to be in front of their radio or TV—with “rabbit ears” antenna adjusted optimally—on a schedule set by the broadcasters. The cost of programming was paid for by commercial advertising and corporate sponsors.

In the cable and satellite era (1970s), consumers began paying for content with subscription fees and programming was “commercial free.” Along with home recording devices—first analog magnetic tape systems like VCRs (1970s) and later digital recorders like DVRs (late 1990s)—came an important shift in viewing behavior. Consumers could do what is now called “time-shifted viewing,” i.e. they could choose when they wanted to experience the recorded content.

Vinyl records, magnetic tapes and optical recording formats preceded downloading and streaming

At first, music publishers mass produced and marketed analog audio recordings—records (1950s) and then audio tapes (1970s)—and consumers purchased and owned a library of recordings. These records and tapes could be enjoyed at any time and place as long as there was an audio system with a stereo turntable or cassette player available.

The same was true of mass-produced CD audio (1980s) and DVD video (late 1990s) optical discs. While these digital formats improved portability and their quality did not deteriorate from repeated play—the way that analog magnetic tape and vinyl did—they required a new generation of optical devices. Portable CD players (1980s) and DVD players (late 1990s) addressed this issue, but consumers still had to maintain a library of purchased titles.

With digital downloading of music and video over the Internet, content could finally be played anywhere and at any time on portable digital players like iPods (2001) and notebook PCs. However, consumers were still required to purchase the titles they wanted to enjoy. Instead of filling bookshelves and cabinets with CD and DVD jewel cases, they now had to maintain libraries of downloaded files on MP3 players, computer hard drives and digital media servers.

When Internet-based media streaming arrived alongside mobile and wireless computing, time- and place-independent viewing became a practical reality. Add to this the subscription model—with (potentially) the entire back catalog of recorded music, TV shows and movies available for a relatively small monthly fee—and consumers began flocking in large numbers to services like Netflix and Spotify.

Streaming media trends to watch in 2016

Media industry analysts have been following the impact of streaming content and technologies; some of their recent insights and trend analyses are summarized below:

Streaming media device adoption in US households with broadband Internet

  • Streaming devices:
    • Linear TV content still dominates US households. However, there are signs that streaming media devices such as Roku, Apple TV, Chromecast and Amazon Fire TV are rapidly shifting viewing habits. Adoption of these devices grew from about 17% of US households with broadband internet in 2014 to about 28% in 2015 [Parks Associates]
On-demand music streaming includes music videos

  • Streaming vs. downloading:
    • Online music streams nearly doubled from 164.5 billion to 317 billion songs
    • Digital song sales dropped 12.5% from 1.1 billion to 964.8 million downloads
    • Digital album sales dropped 2.9% from 106.5 million to 103.3 million downloads [Nielsen 2015 Music Report]
Cable TV subscriptions have been declining with the rise of “cord cutting” and streaming media

  • Cable TV:
    • The cord-cutting trend—households ending their cable TV service—is accelerating. The share of households with cable subscriptions fell from 83% in 2014 to under 80% in 2015 [Pacific Crest].
    • Scheduled “linear” TV fell and recorded “linear” TV was flat (or even increased slightly) from 2014 to 2015, while streamed on-demand video increased [Ericsson ConsumerLab].

While streaming audio and video are growing rapidly, traditional radio and TV still represent by far the largest share of consumer activity. Obviously, some of the cultural and behavioral changes involved in streaming media run up against audience demographics: some older consumers are less likely to shift their habits, while some younger consumers have had few or no “linear” experiences.

As the Ericsson ConsumerLab study shows, teenagers spend less than 20% of their TV viewing time watching a TV screen; the other 80% is spent in front of desktop and laptop computers, tablets and smartphones. Despite these differences, streaming content use is soaring and the era of “linear” media is waning. Just as with eBooks and print books, the electronic alternative is expanding rapidly while the analog form persists and, in some ways, is stronger than ever. Nonetheless, the new era of time- and place-independent, on-demand media is fast approaching.

Adrian Frutiger (1928–2015): Univers and OCR-B

Posted in People in Media History, Phototypesetting, Print Media, Typography on December 21, 2015 by multimediaman
Adrian Frutiger: May 24, 1928 – September 10, 2015

Adrian Frutiger died on September 10, 2015 at the age of 87. He was one of the most important type designers of his generation, having created some 40 fonts, many of them still widely used today. He was also a teacher, author and specialist in the language of graphic expression and—since his career spanned metal, photomechanical and electronic type technologies—Frutiger became an important figure in the transition from the analog to the digital eras of print communications.

Frutiger was born on May 24, 1928 in the town of Unterseen, near Interlaken and about 60 kilometers southeast of the city of Bern, Switzerland. His father was a weaver. As a youth, Adrian showed an interest in handwriting and lettering. He was encouraged by his family and secondary school teachers to pursue an apprenticeship rather than a fine arts career.

Adrian Frutiger around the time of his apprenticeship

At age 16, Adrian obtained a four-year apprenticeship as a metal type compositor with the printer Otto Schlaeffli in Interlaken. He also took classes in drawing and woodcuts at a business school in the vicinity of Bern. In 1949, Frutiger transferred to the School of Applied Arts in Zürich, where he concentrated on calligraphy. In 1951, he created a brochure for his dissertation entitled, “The Development of the Latin Alphabet” that was illustrated with his own woodcuts.

It was during his years in Zürich that Adrian worked on sketches for what would later become the typeface Univers, one of the most important contributions to post-war type design. In 1952, following his graduation, Frutiger moved to Paris and joined the foundry Deberny & Peignot as a type designer.

During his early work with the French type house, Frutiger was engaged in the conversion of existing metal type designs for the newly emerging phototypesetting technologies. He also designed several new typefaces—Président, Méridien and Ondine—in the early 1950s.

Sans serif and Swiss typography

Sans serif type is a product of the twentieth century. Also known as grotesque (or grotesk), sans serif fonts emerged with commercial advertising, especially signage. The original sans serif designs (beginning in 1898) possessed qualities—lack of lower case letters, lack of italics, the inclusion of condensed or extended widths and equivalent cap and ascender heights—that seemingly violated the rules of typographic tradition. As such, these early sans serif designs were often considered too clumsy and inelegant for the professional type houses and their clients.

Rudolf Koch, Kabel, 1927

Paul Renner, Futura, 1927

Eric Gill, Gill Sans, 1927

Along with the modern art and design movements of the early twentieth century, a reconsideration of the largely experimental work of the first generation of sans serif types began in the 1920s. Fonts such as Futura, Kabel and Gill Sans incorporated some of the theoretical concepts of the Bauhaus and De Stijl movements and pushed sans serif to new spheres of respectability.

However, these fonts—which are still used today—did not succeed in elevating sans serif beyond headline usage and banner advertising and into broader application. Sans serif type remained something of an oddity, not yet accepted by the traditional foundry industry as viable in terms of either style or legibility.

In the 1930s, especially within the European countries that fell to dictatorship prior to and during World War II, there was a backlash against modernist conceptions. Sans serif type came under attack, was derided as “degenerate” and banned in some instances. Exceptions to this trend were in the US, where the use of grotesque types was increasing, and Switzerland, where the minimalist typographic ideas of the Bauhaus were brought by designers who had fled the countries ruled by the Nazis.

The Bauhaus School, founded in 1919 in Weimar, Germany, was dedicated to the expansion of the modernist esthetic

After the war, interest in sans serif type design was renewed as a symbol of modernism and a break from the first four decades of the century. By the late 1950s, the most successful period of sans serif type opened up, and the epicenter of this change was Switzerland, signified by the creation of Helvetica (1957) by Eduard Hoffmann and Max Miedinger of the Haas Type Foundry in Münchenstein.

It was the nexus of the creative drive to design the definitively “modern” typeface and the possibilities opened up by the displacement of metal type by phototypesetting that brought sans serif from a niche style to global preeminence.

Frutiger’s Univers

This was the cultural environment that influenced Adrian Frutiger, a Swiss-trained type designer at a French foundry, as he set about his work on a new typeface. As Frutiger explained in a 1999 interview with Eye Magazine, “When I came to Deberny & Peignot in Paris, Futura (though it was called Europe there) was the most important font in lead typesetting. Then one day the question was raised of a grotesque for the Lumitype-Photon [the first phototypesetting system]. …

“I asked him [Peignot] if I might offer an alternative. And within ten days I constructed an entire font system. When I was with Käch I had already designed a thin, normal, semi-bold and italic Grotesque with modulated stroke weights. This was the precursor of Univers. … When Peignot saw it he almost jumped in the air: ‘Good heavens, Adrian, that’s the future!’ ”

An early diagram of Frutiger’s Univers in 1955 shows the original name “Monde”

Final diagram of Frutiger’s 21 styles of Univers in 1955

Originally calling his type design “Monde” (French for “world”), Frutiger’s innovation was that he designed 21 variations of Univers from the beginning; for the first time in the history of typography, a complete set of typefaces was planned precisely as a coherent system. He also gave the styles and weights a numbering scheme beginning with Univers 55. The weights were numbered in increments of ten (45, 55, 65, 75, 85), while variations of the same weight (extended, condensed, ultra condensed and so on, with the obliques/italics taking the even numbers) were numbered in single-digit increments, i.e. 53, 56, 57, 58, 59, etc.
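
The numbering logic can be summarized in a few lines of code. The sketch below is my own simplified reading of the scheme described above (the weight and width names are approximations, not an official Linotype table): the tens digit encodes weight, the units digit encodes width, and even units digits mark the oblique of the preceding roman style.

```python
# Simplified decoder for Frutiger's two-digit Univers numbering scheme.
# Weight and width labels are approximate, for illustration only.
WEIGHTS = {4: "light", 5: "regular", 6: "bold", 7: "black", 8: "extra black"}
WIDTHS = {3: "extended", 5: "normal", 7: "condensed", 9: "ultra condensed"}

def describe_univers(number):
    tens, units = divmod(number, 10)
    weight = WEIGHTS.get(tens, "unknown weight")
    slope = "oblique" if units % 2 == 0 else "roman"
    width = WIDTHS.get(units if units % 2 else units - 1, "unknown width")
    return f"Univers {number}: {weight}, {width}, {slope}"

for n in (45, 55, 56, 57, 65, 75):
    print(describe_univers(n))
# Univers 55 is the normal-width regular roman that anchors the whole system.
```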

Univers was released by Deberny & Peignot in 1957 and it was quickly embraced internationally for both text and display type purposes. Throughout the 1960s and 70s, like Helvetica, it was widely used for corporate identity (GE, Lufthansa, Deutsche Bank). It was the official promotional font of the 1972 Munich Olympic Games.

Frutiger explained the significance of his creation in the interview with Eye Magazine, “It happened to be the time when the big advertising agencies were being set up, they set their heart on having this diverse system. This is how the big bang occurred and Univers conquered the world. But I don’t want to claim the glory. It was simply the time, the surroundings, the country, the invention, the postwar period and my studies during the war. Everything led towards it. It could not have happened any other way.”

Computers and digital typography

Had Adrian Frutiger retired at the age of 29 after designing Univers, he would have already made an indelible contribution to the evolution of typography. However, his work was by no means complete. By 1962, Frutiger had established his own graphic design studio with Bruno Pfaffli and Andre Gurtler in Arcueil near Paris. This firm designed posters, catalogs and identity systems for major museums and corporations in France.

Throughout the 1960s, Frutiger continued to design new typefaces for the phototypesetting industry, working with firms such as Lumitype, Monotype, Linotype and Stempel AG. Among his best-known later designs were Frutiger, Serifa and Avenir. Frutiger’s font systems can be seen to this day on the signage at Orly and Charles de Gaulle airports and the Paris Metro.

The penetration of computers and information systems into the printing and publishing process was well underway by the 1960s. In 1961, thirteen computer and typewriter manufacturers founded the European Computer Manufacturers Association (ECMA), based in Geneva. A top priority of the ECMA was to create an international standard for optical character recognition (OCR)—a system for capturing the image of printed letters and numbers and converting them into electronic data—especially for the banking industry.

By 1968, OCR-A had been developed in the US by American Type Founders—a trust of 23 American type foundries—and it was later adopted by the American National Standards Institute. It was the first widely adopted standard monospaced font that could be read by both machines and the human eye.

However, in Europe the ECMA wanted a font that could serve as an international standard, accommodating the requirements of typography and of computerized scanning technologies all over the world. Among the issues, for example, were the treatment of the British pound symbol (£) and the Dutch IJ and French oe (œ) ligatures. Other technical considerations included the ability to integrate OCR standards with typewriter and letterpress fonts in addition to the latest phototypesetting systems.

Comparison of OCR-A (1968) with Frutiger’s OCR-B (1973)

In 1963, Adrian Frutiger was approached by representatives of the ECMA and asked to design OCR-B as an international standard: a non-stylized alphabet that was also esthetically pleasing to the human eye. Over the next five years, Frutiger showed an exceptional ability to master the engineers’ complicated technical requirements: the grid systems of the different readers, the strict spacing requirements between characters and the special shapes needed to make one letter or number optically distinguishable from another.

In 1973, after multiple revisions and extensive testing, Adrian Frutiger’s OCR-B was adopted as an international standard. Today, the font can be most commonly found on UPC barcodes, ISBN barcodes, government issued ID cards and passports. Frutiger’s OCR-B font will no doubt live on into the distant future—alongside various 2D barcode systems—as one of the primary means of translating analog information into digital data and back again.

Frutiger’s 1989 English translation of “Signs and Symbols: Their Design and Meaning”

Adrian Frutiger’s type design career extended well into the era of desktop publishing, PostScript fonts and the Internet age. In 1989, Frutiger published the English translation of Signs and Symbols: Their Design and Meaning, a theoretical and retrospective study of two-dimensional graphic expression, with typography among its most advanced forms. For someone who spent his life working on the nearly imperceptible details of type and graphic design, Frutiger exhibited an exceptional grasp of the historical and social sources of man’s urge toward pictographic representation and communication.

As an example, Frutiger wrote in the introduction to his book, “For twentieth century humans, it is difficult to imagine a void, a chaos, because they have learned that a kind of order appears to prevail in both the infinitely small and the infinitely large.  The understanding that there is no element of chance around or in us, but that all things, both mind and matter, follow an ordered pattern, supports the argument that even the simplest blot or scribble cannot exist by pure chance or without significance, but rather that the viewer does not clearly recognize the causes, origins, and occasion of such a ‘drawing’.”

Where is VR going and why you should follow it

Posted in Digital Media, Mobile Media, Social Media, Video on November 15, 2015 by multimediaman
Promotional image for Oculus Rift VR headset

On November 2, video game maker Activision Blizzard Entertainment announced a $5.9 billion purchase of King Digital Entertainment, maker of the mobile app game Candy Crush Saga. Activision Blizzard owns popular titles like Call of Duty, World of Warcraft and Guitar Hero—with tens of millions sold—for play on game consoles and PCs. By comparison, King has more than 500 million worldwide users playing Candy Crush on TVs, computers and (mostly) mobile devices.

While it is not the largest-ever acquisition of a game company—Activision bought Blizzard in 2008 for $19 billion—the purchase shows how much the traditional gaming industry believes that future success will be tied to mobile and social media. Other recent acquisitions indicate how the latest in gaming hardware and software have become strategically important for the largest tech companies:

Major acquisitions of gaming companies by Microsoft, Amazon and Facebook took place in 2014

  • September 2014: Microsoft acquired Mojang for $2.5 billion
    Mojang’s Minecraft game has 10 million users worldwide and an active developer community. The Lego-like Minecraft is popular on both Microsoft’s Xbox game console and Windows desktop and notebook PCs. In making the purchase, Microsoft CEO Satya Nadella said, “Gaming is a top activity spanning devices, from PCs and consoles to tablets and mobile, with billions of hours spent each year.”
  • August 2014: Amazon acquired Twitch for $970 million
    The massive online retailer has offered online video since 2006 and the purchase of Twitch—the online and live streaming game service—adds 45 million users to Amazon’s millions of Prime Video subscribers and FireTV (stick and set top box) owners. Amazon’s CEO Jeff Bezos said of the acquisition, “Broadcasting and watching gameplay is a global phenomenon and Twitch has built a platform that brings together tens of millions of people who watch billions of minutes of games each month.”
  • March 2014: Facebook acquired Oculus for $2 billion
    Facebook users account for approximately 20% of all the time that people spend online each day. The Facebook acquisition of Oculus—maker of virtual reality headsets—anticipates that social media will soon include an immersive experience as opposed to scrolling through rectangular displays on PCs and mobile devices. According to Facebook CEO Mark Zuckerberg, “Mobile is the platform of today, and now we’re also getting ready for the platforms of tomorrow. Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate.”

The integration of gaming companies into the world’s largest software, e-commerce and social media corporations is further proof that media and technology convergence is a powerful force drawing many different industries together. As is clear from the three CEO quotes above, a race is on to see which company can offer a mix of products and services sufficient to dominate the number of hours per day the public spends consuming information, news and entertainment on their devices.

What is VR?

Among the most important current trends is the rapid growth and widespread adoption of virtual reality (VR). Formerly of interest to hobbyists and gaming enthusiasts, VR technologies are now moving into mainstream daily use.

A short definition of VR is a computer-simulated artificial world. More broadly, VR is an immersive multisensory, multimedia experience that duplicates the real world and enables users to interact with the virtual environment and with each other. In the most comprehensive VR environments, the sight, sound, touch and smell of the real world are replicated.

Current and most commonly used VR technologies include a stereoscopic headset—which tracks the movement of a viewer’s head in 3 dimensions—and surround sound headphones that add a spatial audio experience. Other technologies such as wired gloves and omnidirectional treadmills can provide tactile and force feedback that enhance the recreation of the virtual environment.
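
A sketch of the core head-tracking step may help: the headset reports the viewer’s head orientation (yaw, pitch and roll), and the renderer converts that into a rotation applied to the virtual camera every frame, so the scene appears to stay fixed as the head turns. The Python below is a minimal, self-contained illustration of that math only, not any particular headset’s SDK.

```python
# Minimal illustration of head tracking: convert a head orientation
# (yaw, pitch, roll in degrees) into a 3x3 rotation matrix that a renderer
# would apply to the virtual camera each frame.
import math

def rotation_from_yaw_pitch_roll(yaw, pitch, roll):
    y, p, r = (math.radians(a) for a in (yaw, pitch, roll))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    # Z-Y-X convention: yaw about the vertical axis, then pitch, then roll
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def rotate(matrix, vector):
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

# Turning the head 30 degrees to the left swings the camera's forward direction,
# which is what makes the rendered scene appear to stay put in the virtual world.
forward = [1.0, 0.0, 0.0]
print(rotate(rotation_from_yaw_pitch_roll(30, 0, 0), forward))
```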

The New York Times’ VR promotion included a Google Cardboard viewer that was sent along with the printed newspaper to 1 million subscribers

Recent events have demonstrated that VR use is becoming more practical and accessible to the general public:

  • On October 13, in a partnership between CNN and NextVR, the presidential debate was broadcast in VR as a live stream and stored for later on demand viewing. The CNN experience made it possible for every viewer to watch the event as though they were present, including the ability to see other people in attendance and observe elements of the debate that were not visible to the TV audience. NextVR and the NBA also employed the same technology to broadcast the October 27 season opener between the Golden State Warriors and New Orleans Pelicans, the first-ever live VR sporting event.
  • On November 5, The New York Times launched a VR news initiative that included the free distribution of Google Cardboard viewers—a folded up cardboard VR headset that holds a smartphone—to 1 million newspaper subscribers. The Times’ innovation required users to download the NYTvr app to their smartphone in order to watch a series of short news films in VR.

Origins of VR

Virtual reality is the product of the convergence of theater, camera, television, science fiction and digital media technologies. The basic ideas of virtual reality go back more than two hundred years and coincide with the desire of artists, performers and educators to recreate scenes and historical events. In the early days this meant painting panoramic views, constructing dioramas and staging theatrical productions where viewers had a 360˚ visual surround experience.

In the late 19th century, hundreds of cycloramas were built—many of them depicting major battles of the Civil War—in which viewers sat in the center of a circular theater as the historical event was recreated in sequence around them. In 1899, a Broadway dramatization of the novel Ben Hur employed live horses galloping straight toward the audience on treadmills while a backdrop revolved in the opposite direction, creating the illusion of high speed. Dust clouds were employed to provide additional sensory elements.

The Kromskop viewer invented by Frederic Eugene Ives at the beginning of the 20th century

Contemporary ideas about virtual reality are associated with the 3-D photography and motion pictures of the early twentieth century. Experimentation with color stereoscopic photography began in the late 1800s, and the first widely distributed 3-D images, taken by Frederic Eugene Ives, were of the 1906 San Francisco earthquake. As with present-day VR, Ives’ images required both a special camera and a viewing device, called the Kromskop, in order to see the 3-D effect.

1950s-era 3-D View-Master with reels

3-D photography expanded and won popular acceptance beginning in the late 1930s with the launch of the View-Master of Edwin Eugene Mayer. The virtual experience of the View-Master system was enhanced with the addition of sound in 1970. Mayer’s company was eventually purchased by toy maker Mattel and later by Fisher-Price, and the product remained successful until the era of digital photography in the early 2000s.

An illustration of the Teleview system that mounted a viewer containing a rotation mechanism in the armrest of theater seats

Experiments with stereoscopic motion pictures were conducted in the late 1800s. The first practical application of a 3-D movie took place in 1922 using the Teleview system of Laurens Hammond (inventor of the Hammond Organ) with a rotating shutter viewing device attached to the armrest of the theater seats.

Prefiguring the present-day inexpensive VR headset, the so-called “golden era” of 3-D film began in the 1950s and included cardboard 3-D glasses. Moviegoers got their first introduction to 3-D with stereophonic sound in 1953 with the film House of Wax starring Vincent Price. The popular enthusiasm for 3-D was eventually overtaken by the practical difficulties associated with the need to project two separate film reels in perfect synchronization.

1950s 3-D glasses and a movie audience wearing them

Subsequent waves of 3-D movies in the second half of the twentieth century—projected from a single film strip—were eventually displaced by the digital film and audio methods associated with the larger formats and Dolby Digital sound of Imax, Imax Dome, Omnimax and Imax 3D. Anyone who has experienced the latest in 3-D movies, such as Avatar (2009), can attest to the mesmerizing impact of the immersive experience made possible by these movie theater techniques.

Computers and VR

Recent photo of Ivan Sutherland; he invented the first head-mounted display at MIT in 1966

It is widely acknowledged that the theoretical possibility of creating virtual experiences that “convince” all the senses of their “reality” began with the work of Ivan Sutherland at MIT in the 1960s. In 1966, Sutherland invented the first head-mounted display—nicknamed the “Sword of Damocles”—designed to immerse the viewer in a simulated 3-D environment. In a 1965 essay called “The Ultimate Display,” Sutherland wrote about how computers have the ability to construct a “mathematical wonderland” that “should serve as many senses as possible.”

With increases in the performance and memory capacity of computers, along with the shrinking size of microprocessors and display technologies, Sutherland’s vision began to take hold in the 1980s and 1990s. Advances in vector-based CGI software, especially flight simulators created by government researchers for military aircraft and space exploration, brought the term “reality engine” into use. These systems, in turn, spawned notions of complete immersion in “cyberspace,” where sight, sound and touch are dominated by computer-generated sensations.

The term “virtual reality” was popularized during these years by Jaron Lanier and his VPL Laboratory. With VR products such as the Data Glove, the Eye Phone and Audio Sphere, Lanier partnered with game makers at Mattel to create the first virtual experiences with affordable consumer products, despite their still limited functionality.

By the end of the first decade of the new millennium, many of the core technologies of present-day VR systems were developed enough to make simulated experiences more convincing and easy to use. Computer animation technologies employed by Hollywood and video game companies pushed the creation of 3-D virtual worlds to new levels of “realness.”

An offshoot of VR, called augmented reality (AR), took advantage of high resolution camera technologies and allowed virtual objects to appear within the actual environment and enabled users to view and interact with them on computer desktop and mobile displays. AR solutions became popular with advertisers offering unique promotional opportunities that capitalized on the ubiquity of smartphones and tablets.

Expectations

A scene from the 2009 film “Avatar”

Aside from news, entertainment and advertising, there are big possibilities opening up for VR in many business disciplines. Some experts expect that VR will impact almost every industry in a manner similar to that of PCs and mobile devices. Entrepreneurs and investors are creating VR companies with the aim of exploiting the promise of the new technology in education, health care, real estate, transportation, tourism, engineering, architecture and corporate communications (to name just a few).

Like consumer-level artificial intelligence, e.g. Apple’s Siri and Amazon’s Echo, present-day virtual reality technologies tend to fall frustratingly short of expectations. However, with the rapid evolution of core technologies—processors, software, video displays, sound, miniaturization and haptic feedback systems—it is conceivable that VR is ripe for a significant leap in the near future.

In many ways, VR is the ultimate product of media convergence as it is the intersection of multiple and seemingly unrelated paths of scientific development. As pointed out by Howard Rheingold in his authoritative 1991 book Virtual Reality, “The convergent nature of VR technology is one reason why it has the potential to develop very quickly from a scientific oddity into a new way of life … there is a significant chance that the deep cultural changes suggested here could happen faster than anyone has predicted.”

Hermann Zapf (1918–2015): Digital typography

Posted in Digital Media, People in Media History, Phototypesetting, Typography on September 30, 2015 by multimediaman
Hermann Zapf: November 8, 1918 – June 4, 2015

On Friday, June 12, Apple released its San Francisco system font for OSX, iOS and watchOS. Largely overlooked amid the media coverage of other Apple product announcements, the introduction of San Francisco was a noteworthy technical event.

San Francisco is a neo-grotesque, sans serif, Pan-European typeface with characters in Latin as well as Cyrillic and Greek scripts. It is significant because it is the first font designed specifically for all of Apple’s display technologies. Important variations have been introduced into San Francisco to optimize its readability on Apple desktop, notebook, TV, mobile and watch devices.

It is also the first font designed by Apple in two decades. San Francisco extends Apple’s association with typographic innovation that began in the mid-1980s with desktop publishing. From a broader historical perspective, Apple’s new font confirms the ideas developed more than fifty years ago by the renowned calligrapher and type designer Hermann Zapf. Sadly, Zapf died at the age of 96 on June 4, 2015, just one week before Apple’s San Francisco announcement.

Hermann Zapf’s contributions to typography are extensive and astonishing. He designed more than 200 typefaces—the popular Palatino (1948), Optima (1952), Zapf Dingbats (1978) and Zapf Chancery (1979) among them—including fonts in Arabic, Pan-Nigerian, Sequoia and Cherokee. Meanwhile, Zapf’s exceptional calligraphic skills were such that he famously penned the Preamble of the Charter of the United Nations in four languages for the New York Pierpont Morgan Library in 1960.

Zapf’s calligraphic skills were called upon for the republication of the Preamble of the UN Charter in 1960 for the Pierpont Morgan Library in New York City.

While his creative accomplishments were extraordinary—far too many to list here—Hermann Zapf’s greatest legacy is the way he thought about type and its relationship to technology as a whole. Zapf was among the first and perhaps the most important typographers to theorize about the need for new forms of type driven by computer and digital technologies.

Early life

Hermann Zapf was born in Nuremberg on November 8, 1918 during the turbulent times at the end of World War I. As he wrote later in life, “On the day I was born, a workers’ and soldiers’ council took political control of the city. Munich and Berlin were rocked by revolution. The war ended, and the Republic was declared in Berlin on 9 November 1918. The next day Kaiser Wilhelm fled to Holland.”

At school, Hermann took an interest in technical subjects. He spent time in the library reading scientific journals and at home, along with his older brother, experimenting with electronics. He also tried hand lettering and created his own alphabets.

Hermann left school in 1933 with the intention of becoming an engineer. However, economic crisis and upheaval in Germany—including the temporary political detention of his father in March 1933 at the prison camp in Dachau—prevented him from pursuing his plans.

Apprentice years

Barred from attending the Ohm Technical Institute in Nuremberg for political reasons, Hermann sought an apprenticeship in lithography. He was hired in February 1934 to a four-year apprenticeship as a photo retoucher by Karl Ulrich and Company.

In 1935, after reading books by Rudolf Koch and Edward Johnson on lettering and illuminating techniques, Hermann taught himself calligraphy. When management saw the quality of Hermann’s lettering, the Ulrich firm began to assign him work outside of his retouching apprenticeship.

At the end of his apprenticeship, Hermann refused to take the final examination at his father’s insistence, on the grounds that the training had been interrupted by many unrelated tasks. He never received his journeyman’s certificate and left Nuremberg for Frankfurt to find work.

Zapf’s Gilgengart, designed originally in 1938

Zapf started his career in type design at the age of 20 after he was employed at the Fürsteneck Workshop House, a printing establishment run by Paul Koch, the son of Rudolf Koch. As he later explained, “It was through the print historian Gustav Mori that I first came into contact with the D. Stempel AG type foundry and Linotype GmbH in Frankfurt. It was for them that I designed my first printed type in 1938, a fraktur type called ‘Gilgengart’.”

War years

Hermann Zapf was conscripted in 1939 and called up to serve in the German army near the town of Pirmasens on the French border. After a few weeks, he developed heart trouble and was transferred from the hard labor of shovel work to the writing room where he composed camp reports and certificates.

When World War II started, Hermann was dismissed for health reasons. In April 1942 he was called up again, this time for the artillery. Hermann was quickly reassigned to the cartographic unit where he became well-known for his exceptional map drawing skills. He was the youngest cartographer in the German army through the end of the war.

An example of calligraphy from the sketchbook that Hermann Zapf kept during World War II.

Zapf was captured after the war by the French and held in a field hospital in Tübingen. As he recounted, “I was treated very well and they even let me keep my drawing instruments. They had a great deal of respect for me as an ‘artiste’ … Since I was in very poor health, the French sent me home just four weeks after the end of the war. I first went back to my parents in my home town of Nuremberg, which had suffered terrible damage.”

Post-war years

In the years following the war, Hermann taught and gave lessons in calligraphy in Nuremberg. In 1947, he returned to Frankfurt and took a position with the Stempel AG foundry with little qualification other than his sketch books from the war years.

From 1948 to 1950, while he worked at Stempel on typography designs for metal punch cutting, he developed a specialization in book design. Hermann also continued to teach calligraphy twice a week at the Arts and Crafts School in Offenbach.

Zapf’s Palatino (1948) and Optima (1952) fonts

It was during these years that Zapf designed Palatino and Optima. Working closely with the punch cutter August Rosenberg, Hermann designed Palatino and named it after the 16th century Italian master of calligraphy Giambattista Palatino. In the Palatino face, Zapf attempted to emulate the forms of the great humanist typographers of the Renaissance.

Optima, on the other hand, expressed more directly the genius of Zapf’s vision and foreshadowed his later contributions. Optima can be described as a hybrid serif-and-sans serif typeface because it blends features of both: serif-less thick and thin strokes with subtle swelling at the terminals that suggest serifs. Zapf designed Optima during a visit to Italy in 1950 when he examined inscriptions at the Basilica di Santa Croce in Florence. It is remarkably modern, yet clearly derived from the Roman monumental capital model.

By the time Optima was released commercially by Stempel AG in 1958, the industry had begun to move away from metal casting methods and into phototypesetting. As many of his most successful fonts were reworked for the new methods, Zapf recognized—perhaps before and more profoundly than most—that phototypesetting was a transitional technology on the path from analog to an entirely new digital typography.

Digital typography

To grasp the significance of Zapf’s work, it is important to understand that, although “cold” photo type was an advance over “hot” metal type, both are analog technologies, i.e. they require the transfer of “master” shapes from manually engraved punches or hand drawn outlines to final production type by way of molds or photomechanical processes.

Due to the inherent limitations of metal and photomechanical media, analog type masters often contain design compromises. Additionally, the reproduction from one master generation to the next has variations and inconsistencies connected with the craftsmanship of punch cutting or outline drawing.

With digital type, the character shapes exist as electronic files that “describe” fonts as mathematical vector outlines or as raster images plotted on an XY coordinate grid. With computer font data, typefaces can have many nuances and features that could never be rendered in metal or photo type. Meanwhile, digital font masters can be copied precisely, without any quality degradation from one generation to the next.
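
The point about lossless digital masters can be illustrated with a toy example: if a glyph is stored as outline coordinates, then scaling it to any size is pure arithmetic, and every copy and every size is derived from the same data with no generational loss. The sketch below is a deliberately crude illustration (invented coordinates, no curves or hinting), not an actual font format.

```python
# Toy illustration of a digital type "master": a glyph stored as outline
# coordinates can be scaled to any size by arithmetic alone, so no quality
# is lost from one generation, or one size, to the next.

# A crude triangular outline on a 1000-unit em square (hypothetical data).
glyph_outline = [(100, 0), (500, 1000), (900, 0), (700, 0), (500, 550), (300, 0)]

def scale_glyph(outline, units_per_em, point_size):
    factor = point_size / units_per_em
    return [(round(x * factor, 2), round(y * factor, 2)) for x, y in outline]

print(scale_glyph(glyph_outline, 1000, 12))   # a 12 pt rendering
print(scale_glyph(glyph_outline, 1000, 72))   # a 72 pt rendering from the same master
```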

Hermann Zapf in 1960

From the earliest days of computers, Hermann Zapf began advocating for the advancement of digital typography. He argued that type designers needed to take advantage of the possibilities opened up by the new technologies and needed to create types that reflected the age. Zapf also combined knowledge of the rules of good type design with a recognition that fonts needed to be created specifically for electronic displays (at that time CRT-based monitors and televisions).

In 1959, at the age of 41, Zapf wrote in an industry journal, “It is necessary to combine the purpose, the simplicity and the beauty of the types, created as an expression of contemporary industrial society, into one harmonious whole. We should not seek this expression in imitations of the Middle Ages or in revivals of nineteenth century material, as sometimes seems the trend; the question for us is satisfying tomorrow’s requirements and creating types that are a real expression of our time but also represent a logical continuation of the typographic tradition of the western world.”

Warm reception in the US

Despite a very cold response in Germany—his ideas about computerized type were rejected as “unrealistic” by the Technical University in Darmstadt, where he was a lecturer, and by leading printing industry representatives—Hermann persevered. Beginning in the early 1960s, Zapf delivered a series of lectures in the US that were met with enthusiasm.

For example, a talk he delivered at Harvard University in October 1964 was so well received that it led to the offer of a professorship at the University of Texas at Austin. The governor even made Hermann an “Honorary Citizen of the State of Texas.” In the end, Zapf turned down the opportunity due to family obligations in Germany.

Among his many digital accomplishments are the following:

  • Rudolf Hell

    When digital typography was born in 1964 with the Digiset system of Rudolf Hell, Hermann Zapf was involved. By the early 1970s, Zapf had created some of the first typefaces designed specifically for a digital system: Marconi, Edison, and Aurelia.

  • In 1976, Hermann was invited to take up a professorship in typographic computer programming at Rochester Institute of Technology (RIT) in Rochester, New York, the first of its kind in the world. Zapf taught at RIT for ten years and was able to develop his ideas in collaboration with computer scientists and representatives of IBM and Xerox.
  • With Aaron Burns

    In 1977, Zapf partnered with graphic designers Herb Lubalin and Aaron Burns and founded Design Processing International, Inc. (DPI) in New York City. The firm developed software with menu-driven typesetting features that could be used by non-professionals. The DPI software focused on automating hyphenation and justification rather than on type design itself (a toy illustration of this kind of justification logic appears after this list).

  • In 1979, Hermann began a collaboration with Professor Donald Knuth of Stanford University to develop a font that was adaptable for mathematical formulae and symbols.
  • With Peter Karow

    In the 1990s, Hermann Zapf continued to focus on the development of professional typesetting algorithms with his “hz-program,” developed in collaboration with Peter Karow of the font company URW. Eventually the Zapf composition engine was incorporated by Adobe Systems into the InDesign desktop publishing software.
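
The difference between hand-set spacing and algorithmic composition is easier to see in code. The sketch below is only a toy, with no relation to DPI’s actual software or to the hz-program: it breaks a text greedily into lines and then pads the word spaces so that every full line reaches the measure, which is the basic job of an automated hyphenation-and-justification engine (minus the hyphenation).

```python
# Toy justification routine: greedy line breaking plus even padding of word gaps.
# Purely illustrative -- not the DPI or hz-program algorithms.
def justify(text: str, width: int) -> list:
    words = text.split()
    lines, line = [], []
    for word in words:
        # Start a new line when the next word would overflow the measure.
        if line and len(" ".join(line + [word])) > width:
            lines.append(line)
            line = []
        line.append(word)
    if line:
        lines.append(line)

    justified = []
    for i, line in enumerate(lines):
        if i == len(lines) - 1 or len(line) == 1:
            justified.append(" ".join(line))      # last (or one-word) line stays ragged
            continue
        gaps = len(line) - 1
        spaces_needed = width - sum(len(w) for w in line)
        base, extra = divmod(spaces_needed, gaps)
        out = ""
        for j, word in enumerate(line[:-1]):
            out += word + " " * (base + (1 if j < extra else 0))
        justified.append(out + line[-1])
    return justified

sample = "Typography is the craft of endowing human language with a durable visible form."
for row in justify(sample, width=28):
    print(f"|{row}|")
```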

Zapf’s legacy

Hermann Zapf actively participated—into his 70s and 80s—in some of the most important developments in type technology of the past fifty years. This was no accident. He possessed both a deep knowledge of the techniques and forms of type history and a unique appreciation for the impact of information technologies on the creation and consumption of the written word.

In 1971, Zapf gave a lecture in Stockholm called “The Electronic Screen and the Book” where he said, “The problem of legibility is as old as the alphabet, for the identification of a letterform is the basis of its practical use. … To produce a clear, readable text that is pleasing to the eye and well arranged has been the primary goal of typography in all the past centuries. With a text made visible on a CRT screen, new factors for legibility are created.”

More than 40 years before the Apple design team set out to create a font that is legible on multiple computer screens, the typography visionary Hermann Zapf was theorizing about the very same questions.

The mobile juggernaut

Posted in Mobile, Mobile Media, Social Media with tags , , , , , , , on August 31, 2015 by multimediaman
Mark Zuckerberg

On August 27, Mark Zuckerberg posted the following message on his personal Facebook account: “We just passed an important milestone. For the first time ever, one billion people used Facebook in a single day. On Monday, 1 in 7 people on Earth used Facebook to connect with their friends and family.”

Reaching one billion users in a single day on August 24, 2015 is a remarkable accomplishment for a social network that was started by Zuckerberg and a group of college dormitory friends in 2004. Given that Facebook became available for public use less than ten years ago, the milestone illustrates the speed and extent to which social media has penetrated the daily lives of people all over the world.

While Facebook is very popular in the US and Canada, 83.1% of the 1 billion daily active users (DAUs) come from other parts of the world. Despite being blocked in China—where there are 600 million internet users—Facebook has hundreds of millions of active users in India, Brazil, Indonesia, Mexico, the UK, Turkey, the Philippines, France and Germany.

Facebook’s “Mobile Only” active users.

A major driver behind the global popularity and growth speed of Facebook is the mobile technology revolution. According to published data, Facebook reached an average of 844 million mobile active users during the month of June 2015 and industry experts are expecting this number to hit one billion in the very near future. Clearly, without smartphones, tablets and broadband wireless Internet access, Facebook could not have achieved the DAU milestone since many of the one billion people are either “mobile first” or “mobile only” users.

From mobile devices to wearables

When I last wrote about mobile technologies two-and-a-half years ago, the rapid rise of smartphones and tablets and the end of the PC era of computing were dominant topics of discussion. Concerns were high that significant resources were being shifted toward mobile devices and advertising and away from older technologies and media platforms. The move from PCs and web browsers toward apps on smartphones and tablets was presenting even companies like Facebook and Google with a “mobility challenge.”

Today, while mobile device expansion has slowed and the dynamics within the mobile markets are becoming more complex, the overall trend of PC displacement continues. According to IDC, worldwide tablet market growth is falling, smartphone market growth is slowing and the PC market is shrinking. On the whole, however, smartphone sales represent more than 70% of total personal computing device shipments and, according to an IDC forecast, this will reach nearly 78% in 2019.

IDC’s Worldwide Device Market 5 Year Forecast

According to IDC’s Tom Mainelli, “For more people in more places, the smartphone is the clear choice in terms of owning one connected device. Even as we expect slowing smartphone growth later in the forecast, it’s hard to overlook the dominant position smartphones play in the greater device ecosystem.”

While economic troubles in China and other market dynamics have led some analysts to conclude that the smartphone boom has peaked, it is clear that consumers all over the world prefer the mobility, performance and accessibility of their smaller devices.

Ericsson’s June 2015 Mobility Report projects 6.1 billion smartphone users by 2020.

According to the Ericsson Mobility Report, there will be 6.1 billion smartphone users by 2020. That is 70% of the world’s population.

Meanwhile, other technology experts suggest that wearables—smartwatches, fitness devices, smart clothing and the like—are expanding the mobile computing spectrum and making it more complex. Since many wearable electronic products integrate easily with smartphones, this new form factor is expected to push mobile platforms into new areas of performance and power.

Despite the reserved consumer response to the Apple Watch and the failure of Google Glass, GfK predicts that 72 million wearables will be sold in 2015. Other industry analysts are also expecting wearables to become untethered from smartphones and usher in the dawn of “personalized” computing.

Five mobile trends to watch

With high expectations that mobile tech will continue to play a dominant role in the media and communications landscape, these are some major trends to keep an eye on:

Wireless Broadband: Long Term Evolution (LTE) connectivity reached 50% of the worldwide smartphone market by the end of 2014, and projections show it will likely reach 60% by the end of this year. A new generation of mobile data technology has appeared roughly every ten years since 1G was introduced in 1981. The fourth generation (4G) LTE systems were first introduced in 2012. 5G development has been underway for several years and promises speeds of several tens of megabits per second per user, with commercial introduction expected sometime in the early 2020s.

Apple’s A8 mobile processor is 50 times faster than the original iPhone processor.

Mobile Application Processors: Mobile system-on-a-chip (SoC) development is one of the most intensely competitive sectors of computer chip technology today. Companies like Apple, Qualcomm and Samsung are all pushing the capabilities and speeds of their SoCs to get the maximum performance with the least energy consumption. Apple’s SoCs have set the benchmark in the industry for performance: the iPhone 6 contains an A8 processor that is 40% more powerful than the previous A7 chip and 50 times faster than the processor in the original iPhone. A new A9 processor will likely be announced with the next-generation iPhone in September 2015 and is expected to bring a 29% performance boost over the A8.

Pressure Sensitive Screens: Called “Force Touch” by Apple, this new mobile display capability allows users to apply varying degrees of pressure to trigger specific functions on a device. Just like “touch” functionality—swiping, pinching, etc.—pressure-sensitive interaction with a mobile device adds a new dimension to the human-computer interface. The feature was first launched by Apple with the Apple Watch, which has only a limited screen area on which to perform touch gestures.

Customized Experiences: With mobile engagement platforms, smartphone users can receive highly targeted promotions and offers based upon their location within a retail establishment. Also known as proximity marketing, the technology uses mobile beacons with Bluetooth communications to send marketing text messages and other notifications to a mobile device that has been configured to receive them.
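
As a rough illustration of what a beacon-driven app actually does, the Python sketch below decodes a raw manufacturer-data payload laid out according to Apple’s published iBeacon advertisement format (company ID, type, proximity UUID, major, minor, TX power) and estimates distance from signal strength. Scanning for real advertisements would require a Bluetooth stack on an actual device; the UUID, major/minor values and RSSI here are entirely hypothetical.

```python
# Decode a (hypothetical) iBeacon manufacturer-data payload and estimate range.
# Layout assumed from Apple's published iBeacon advertisement format.
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Return (uuid, major, minor, tx_power) or None if this is not an iBeacon frame."""
    if len(mfg_data) < 25 or mfg_data[0:4] != b"\x4c\x00\x02\x15":
        return None
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    tx_power = struct.unpack("b", mfg_data[24:25])[0]   # calibrated signal (dBm) at 1 m
    return proximity_uuid, major, minor, tx_power

def estimate_distance(rssi: int, tx_power: int, n: float = 2.0) -> float:
    """Log-distance path-loss estimate in meters (very approximate indoors)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# Hypothetical frame: a store's UUID with major=1 (store) and minor=42 (aisle).
frame = (b"\x4c\x00\x02\x15"
         + uuid.UUID("12345678-1234-5678-1234-567812345678").bytes
         + struct.pack(">HH", 1, 42)
         + struct.pack("b", -59))
beacon = parse_ibeacon(frame)
print(beacon, round(estimate_distance(rssi=-72, tx_power=beacon[3]), 1), "meters")
```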

Mobile Apps: The mobile revolution has been a disruptive force for the traditional desktop software industry. Microsoft is now offering its Office Suite of applications to both iOS and Android users free of charge. In August, Adobe announced that it would be releasing a mobile and full-featured version of its iconic Photoshop software in October as a free download and as part of its Creative Cloud subscription.

With mobile devices, operating systems, applications and connectivity making huge strides and expanding across the globe by the billions, it is obvious that every organization and business should be navigating in the wake of this technology juggernaut. This begins with an internal review of your mobile practices:

  • Do you have a mobile communications and/or operations strategy?
  • Is your website optimized for a mobile viewing experience?
  • Are you encouraging the use of smartphones and tablets and building a mobile culture within your organization?
  • Are you using text messaging for any aspect of your daily work?
  • Are you using social media to communicate with your members, staff, prospects or clients?

If the answer to any of these questions is no, then it is time to act.

Efraim “Efi” Arazi (1937–2013): Color electronic prepress systems

Posted in Business systems, People in Media History, Prepress, Print Media with tags , , , , , , , , , , on July 31, 2015 by multimediaman
Efraim “Efi” Arazi: April 14, 1937 – April 14, 2013

One of the most important achievements of personal computers and mobile wireless technologies is that they have made it possible for the general public to do things that could previously be done only by professionals.

Take video for example: according to YouTube statistics, 300 hours of digital video is uploaded every minute of every day by people all over the world. This remarkable volume of video is being generated because just about anyone can record, edit and upload a high-definition movie from their smartphone. According to a recent Pew Research study, about one third of online adults (ages 18-50) had posted digital video to a website by 2013.

It is easy to take for granted the video production functions that are performed routinely today on inexpensive and easy-to-use mobile devices. Less than ten years ago, the ability to capture and edit HD video would have cost tens of thousands of dollars in digital camera and production equipment and required extensive training to use.

The same can be said for the ability to quickly create a document in a word processing program and insert high resolution graphics anywhere on the page, cropping and scaling as needed. Applying filters and adjusting image quality (contrast, brightness, sharpness) is also second nature as these functions are today available on every mobile device.

CEPS

Four decades ago, before the personal computer existed, electronic image editing, scaling and cropping could only be performed on prepress systems that cost more than $1 million. This was the era of color electronic prepress systems (CEPS), which were built on state-of-the-art minicomputers with reel-to-reel magnetic tape for data storage.

Arazi making a presentation of the Scitex CEPS equipment in 1979

During the 1960s and 1970s, as commercial offset lithography and film-based color reproduction were overtaking letterpress and single color work, high-end digital electronic production systems were acquired by the big printing companies and major publishers that could afford the investment.

By the 1960s—after analog electronic systems had been widely adopted in pressrooms, prepress and typesetting departments across both Europe and America—a race was on to develop a fully computerized page composition system. Hell, Crosfield, Dai Nippon Screen and other companies that had been part of the post-war electronics revolution jumped into the market to try to solve the problem of merging text and color photographs electronically on a computer display.

However, it was a newcomer to the graphic arts industry from Israel called Scitex, founded by Efraim “Efi” Arazi in 1968, that made the highly anticipated breakthrough. Foreshadowing the impact of PC-based desktop publishing on graphic communications in the late 1980s, Scitex introduced digital files and computerization to the prepress production process and forever changed the printing industry.

Scitex

Efi Arazi (born in Jerusalem on April 14, 1937) entered the Israeli military at 16, without having graduated from high school. He made a name for himself as an exceptional electronics specialist while working on radar systems in the Israeli air force. Following his military service, and with the assistance of the US embassy, Arazi was admitted to the Massachusetts Institute of Technology in 1958 as an “extraordinary case” despite his lack of the normally requisite secondary school diploma.

While attending MIT, Arazi also worked at Harvard University’s observatory and digital photography lab. Under the direction of Harvard Professor Mario Grossi, Arazi petitioned NASA and was awarded funds to develop a camera system for scanning the surface of the moon on the unmanned lunar probes in 1966 and 1967. It has also been reported that Arazi’s invention was part of the equipment on the Apollo 11 mission that captured and transmitted video of Neil Armstrong’s first footsteps on the Moon on July 20, 1969.

After earning a bachelor’s degree in engineering at MIT, Arazi worked in the US for a short time at Itek Corporation, a defense contractor that specialized in spy satellite imagery. In 1967 he returned to Israel and one year later—along with several others who had been educated in the US—founded Scientific Technologies (later shortened to Scitex) with the aim of developing electro-optical devices for commercial purposes.

The Scitex Response 80 system and an example of a stitching design from it

Scitex’s first products were developed for the textile industry. The company sold nearly one hundred electronic systems that automated the process of creating knitting patterns. Since many colors were used in complex fabric designs such as the popular Jacquard pattern, Arazi and his Scitex team developed a scanner (Chroma-Scan) and image manipulation workstation (Response 80) that programmed electronic double-knit stitching looms.

These optical systems replaced the manual, time-consuming stitch-by-stitch drawings and punch cards that had been widely used in the textile industry up to that time. Scitex also later devised a system for imaging film for printing on textiles that included overprinting, trapping and repeating patterns.

Response 300

Recognizing the potential for new technologies in the growing international printing and publishing industries, Scitex began development of a computerized color prepress system in 1975. Arazi stunned the graphic arts industry in the fall of 1979 when he demonstrated the Response 300 system for the first time at the GEC expo in Milan, Italy.

The Response 300 included an integrated color drum scanner, image editing workstation and laser film plotter. Directly challenging the domination of high-tech graphic arts equipment by Hell (Germany) and Crosfield (UK), Scitex was the first company in the world to combine color image retouching and page makeup on a single console.

An early model Scitex Response workstation and console

Prior to the Response 300, the electronic color scanning process was based on an analog transfer of color separation information directly from a drum scanner to the film output device. The innovation of Arazi and Scitex was to place a minicomputer (at that time an HP1000) between the scanner and plotter such that the color separations were captured and stored in digital form. The proprietary image files could then be color corrected, retouched, scaled and cropped on screen prior to final output as film separations.
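
To make “capturing and storing separations in digital form” concrete, here is a minimal, purely illustrative Python sketch; it bears no relation to Scitex’s proprietary processing. It simply converts one scanned RGB sample into the cyan, magenta, yellow and black values that, once held as data rather than exposed film, could be retouched, scaled or cropped before output.

```python
# Textbook one-pixel illustration of a color separation (not Scitex's method):
# split an RGB sample into C, M, Y and K ink fractions with simple black generation.
def separate_cmyk(r: int, g: int, b: int):
    """Naive RGB (0-255) to CMYK (0.0-1.0) conversion."""
    c = 1.0 - r / 255.0
    m = 1.0 - g / 255.0
    y = 1.0 - b / 255.0
    k = min(c, m, y)                      # gray component replaced by black ink
    if k >= 1.0:                          # pure black pixel
        return 0.0, 0.0, 0.0, 1.0
    return tuple(round((x - k) / (1.0 - k), 3) for x in (c, m, y)) + (round(k, 3),)

# A warm, skin-tone-like pixel as it might come off a drum scanner.
print(separate_cmyk(220, 170, 140))   # -> (cyan, magenta, yellow, black) fractions
```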

In describing the significance of the accomplishment, industry historian Andy Tribute later explained, “It allowed you to do in real-time on a terminal the sort of things we do in Photoshop now. … I remember watching Efi do a demo where he had a picture of a person with a Rolex watch on and he changed the date in real time on the Rolex. Today that may seem nothing but back then it blew my mind.”

Within one year, Scitex had sold $100 million worth of Response systems to printers and publishers. Through the mid-1980s, Arazi led Scitex as it developed a suite of products (Raystar, SmartScanner, Whisper, Prisma and the Prismax Superstation, to name a few) that brought the latest in minicomputer technologies to high-end prepress workflows. Scitex customers gladly paid the $1 million price tag for the flexibility and time savings that the systems provided.

DTP & EFI

The first European installation of the Response 200 system for the textile industry in 1975

Scitex remained an innovator throughout the 1980s and 1990s as proprietary technologies and CEPS gave way to desktop publishing, industry standard file formats and PostScript workflows. Scitex was among the first prepress technology companies to embrace the introduction of Macintosh computers into graphic arts production.

In 1988, Scitex partnered with Quark—developer of the most sophisticated desktop publishing software of the time—and made it possible for QuarkXPress users to build compound documents with high-resolution, full-color images for output to both commercial and publication printing.

In 1985, Arazi had already pushed the industry forward with the development of Handshake, a Scitex product that allowed a wide variety of systems, including those of competitors, to send and receive data from the Response line of products. Scitex later advocated for Digital Data Exchange Standards, along with Hell, Crosfield, Eikonix and others, to smooth the transfer of data between all systems in the industry.

In June 1988, Arazi stepped down as President and CEO of Scitex. Six months later, when Mirror Group’s Robert Maxwell acquired a controlling stake in Scitex, Efi Arazi also resigned as chairman of the board. While the company had reached the height of its success with revenues approaching $1 billion and 4,000 employees, Arazi knew that personal computers were transforming the industry and it was time to move on to other business ventures.

After Arazi’s departure, Scitex continued to develop prepress workflow systems, laser imaging equipment, desktop scanners, digital color and soft proofing devices. The company participated in the transition from film-based workflows to the direct-to-plate revolution of the mid-1990s.

Along with all of its competitors, Scitex began to struggle financially and ended up selling its graphic arts group to Vancouver-based competitor Creo Products in 2000. The division of the company that moved into digital printing, Scitex Vision, was acquired, along with the Scitex name, by HP in 2005. The remainder of the business was renamed Scailex at that time.

In 1988, at the age of 51, Efi Arazi founded Electronics for Imaging (EFI). The new venture was no less successful than Scitex, as EFI raster image processors were integrated into many high-quality color laser and toner-based printing devices. By the 1990s, the EFI Fiery technology had become a standard in the graphic arts industry for low-cost, high-quality color proofs. The company—which bears the first name of its founder as an acronym—later expanded into inkjet printing devices, printing industry productivity software, and print server and workflow software tools. Today EFI is one of the most important and successful technology companies in the rapidly changing printing industry. Efraim Arazi died on April 14, 2013 at age 76.