Archive for the Mobile Category

Books, e-books and the e-paper chase

Posted in Digital Media, Mobile, Mobile Media, Paper, Print Media on March 22, 2016 by multimediaman

Last November Amazon opened its first retail book store in Seattle near the campus of the University of Washington. More than two decades after it pioneered online book sales—and initiated the e-commerce disruption of the retail industry—the $550 billion company seemed to be taking a step backward with its “brick and mortar” Amazon Books.

Amazon opened its first retail book store in Seattle on November 3, 2015

However, Amazon launched its store concept with a nod to traditional consumer shopping habits, i.e. the ability to “kick the tires.” Amazon knows very well that many customers like to browse the shelves in bookstores and fiddle with electronic gadgets like the Kindle, Fire TV and Echo before they make buying decisions.

So far, the Seattle book store has been successful and Amazon has plans to open more locations. Some unique features of the Amazon.com buying experience have been extended to the book store. Customer star ratings and reviews are posted near book displays; shoppers are encouraged to use the Amazon app and scan bar codes to check prices.

Amazon’s book store initiative was also possibly motivated by the persistence and strength of the print book market. Despite the rapid rise of e-books, print books have shown a resurgence of late. After print book sales declined by 15 million units in 2013 to just above 500 million, the past two years have seen an increase to 560 million in 2014 and 570 million in 2015. Meanwhile, the American Booksellers Association reported a substantial increase in independent bookstores over the past five years (1,712 member stores in 2,227 locations in 2015, up from 1,410 in 1,660 locations in 2010).

Print books and e-books

After rising rapidly since 2008, e-book sales have stabilized at between 25% and 30% of total book sales

The ratio of e-book to print book sales appears to have leveled off at around 1 to 3. This relationship is consistent with recent public perception surveys and learning studies showing that the reading experience and information retention of print books are superior to those of e-books.

The reasons for the recent uptick in print sales and the slowing of e-book expansion are complex. Changes in the overall economy, adjustments to bookstore inventory from digital print technologies and the acclimation of consumers to the differences between the two media platforms have created a dynamic and rapidly shifting landscape.

As many analysts have insisted, it is difficult to make any hard and fast predictions about future trends of either segment of the book market. However, two things are clear: (1) the printed book will undergo little further evolution and (2) the e-book is headed for rapid and dramatic innovation.

Amazon launched the e-book revolution in 2007 with the first Kindle device. Although digital books had been available for decades in various computer file formats and on media such as CD-ROMs, e-books connected with Amazon’s Kindle took off in popularity beginning in 2008. The most important technical innovation of the Kindle—and a major factor in its success—was the implementation of the e-paper display.

Distinct from the backlit LCD displays on most mobile devices and personal computers, e-paper displays are designed to mimic the appearance of ink on paper. Another important difference is that the energy requirements of e-paper devices are significantly lower than those of LCD-based systems. Even in later models that offer built-in lighting for low-light reading conditions, e-paper devices will run for weeks on a single charge while most LCD systems require a recharge in less than 24 hours.

Nick Sheridon and Gyricon

The theory behind the Kindle’s ink-on-paper emulation was originated in the 1970s at the Xerox Palo Alto Research Center in California by Nick Sheridon. Sheridon developed his concepts while working to overcome limitations with the displays of the Xerox Alto, the first desktop computer. The early monitors could only be viewed in darkened office environments because of insufficient brightness and contrast.

Nick Sheridon and his team at Xerox PARC invented Gyricon in 1974, a thin layer of transparent plastic composed of bichromal beads that rotate with changes in voltage to create an image on the surface

Sheridon sought to develop a display that could match the contrast and readability of black ink on white paper. Along with his team of engineers at Xerox, Sheridon developed Gyricon, a substrate with thousands of microscopic plastic beads—each of which was half black and half white—suspended in a thin, transparent silicone sheet. Changes in voltage polarity caused either the white or black side of the beads to rotate up, displaying images and text without backlighting or special ambient light conditions.
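To make the mechanism concrete, here is a minimal sketch of the idea in code—an illustrative model written for this post, not Xerox’s actual drive electronics: each bead is treated as a one-bit pixel whose visible hemisphere is set by the sign of the voltage applied beneath it.

```python
# Illustrative model of a bichromal-bead (Gyricon-style) display.
# Each bead is a one-bit pixel: the polarity of the applied voltage decides
# whether the white or the black hemisphere faces the viewer.

def apply_voltages(voltages):
    """Map a grid of voltage polarities to bead orientations.

    Positive voltage -> white hemisphere up (True); negative -> black up (False).
    """
    return [[v > 0 for v in row] for row in voltages]

def render(beads):
    """Render the bead grid as text: '.' for white-up, '#' for black-up."""
    return "\n".join("".join("." if up else "#" for up in row) for row in beads)

if __name__ == "__main__":
    # Drive a small patch of beads so the black-up beads form a crude "T".
    voltages = [
        [-1, -1, -1, -1, -1],
        [+1, +1, -1, +1, +1],
        [+1, +1, -1, +1, +1],
    ]
    print(render(apply_voltages(voltages)))
```

The same one-bit-per-bead logic, scaled to millions of beads and driven by a thin electrode array, is essentially what lets an image persist on the sheet with no power draw between updates.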

After Xerox cancelled the Alto project in the early 1980s, Sheridon took his Gyricon technology in a new direction. By the late 1980s, he was working on methods to manufacture a new digital display system as part of the “paperless office.” As Sheridon explained later, “There was a need for a paper-like electronic display—e-paper! It needed to have as many paper properties as possible, because ink on paper is the ‘perfect display.’”

In 2000, Gyricon LLC was founded as a subsidiary of Xerox to develop commercially viable e-paper products. The startup opened manufacturing facilities in Ann Arbor, Michigan and developed several products including e-signage that utilized Wi-Fi networking to remotely update messaging. Unfortunately, Xerox shut down the entity in 2005 due to financial problems.

Pioneer of e-paper, Nick Sheridon

Among the challenges Gyricon faced were making a truly paper-like material that had sufficient contrast and resolution while keeping manufacturing costs low. Sheridon maintained that e-paper displays would only be economically viable if units were sold for less than $100 so that “nearly everyone could have one.”

As Sheridon explained in a 2009 interview: “The holy grail of e-paper will be embodied as a cylindrical tube, about 1 centimeter in diameter and 15 to 20 centimeters long, that a person can comfortably carry in his or her pocket. The tube will contain a tightly rolled sheet of e-paper that can be spooled out of a slit in the tube as a flat sheet, for reading, and stored again at the touch of a button. Information will be downloaded—there will be a simple user interface—from an overhead satellite, a cell phone network, or an internal memory chip.”

E Ink

By the 1990s competitors began entering the e-paper market. E Ink, founded in 1998 by a group of scientists and engineers from MIT’s Media Lab including Russ Wilcox, developed a concept similar to Sheridon’s. Instead of using rotating beads with white and black hemispheres, E Ink introduced a method of suspending microencapsulated cells filled with both black and white particles in a thin transparent film. Electrical charges to the film caused the black or white particles to rise to the top of the microcapsules and create the appearance of a printed page.

E Ink cofounder Russ Wilcox

E Ink’s e-paper technology was first implemented commercially by Sony in 2004 in the LIBRIé, the first e-reader with an e-paper display. In 2006, Motorola integrated an E Ink display in its F3 cellular phone. A year later, Amazon included E Ink’s 6-inch display in the first Amazon Kindle, which became by far the most popular device of its kind.

Kindle Voyage (2014) and Kindle Paperwhite (2015) with the latest e-paper displays (Carta) from E Ink

Subsequent generations of Kindle devices have integrated E Ink displays with progressively improved contrast, resolution and energy consumption. By 2011, Kindle models included touch screen capability (the original Kindle had an integrated hardware keyboard for input).

The current edition of the Kindle Paperwhite (3rd Generation) combines built-in lighting and a touch interface with E Ink Carta technology and a resolution of 300 pixels per inch. Many other e-readers such as the Barnes & Noble Nook, the Kobo, the Onyx Boox and the PocketBook also use E Ink products for their displays.

Historical parallel

The quest to replicate, as closely as possible in electronic form, the appearance of ink on paper is logical enough. In the absence of a practical and culturally established form, a new medium naturally strives to emulate the one that came before it. This process is reminiscent of the evolution of the first printed books. For many decades, print carried over the characteristics of the books that were hand-copied by scribes.

It is well known that Gutenberg’s “mechanized handwriting” invention (1440-50) sought to imitate the best works of the Medieval monks. The Gutenberg Bible, for instance, has two columns of print text while everything else about the volume—paper, size, ornamental drop caps, illustrations, gold leaf accents, binding, etc.—required techniques that preceded the invention of printing. Thus, the initial impact of Gutenberg’s system was an increase in the productivity of book duplication and the displacement of scribes; it would take some time for the implications of the new process to work their way through the function, form and content of books.

Ornamented title page of the Gutenberg Bible printed in 1451

More than a half century later—following the spread of Gutenberg’s invention to the rest of Europe—the book began to evolve dramatically and take on attributes specific to printing and other changes taking place in society. For example, by the first decade of the 1500s, books were no longer stationary objects to be read in exclusive libraries and reading rooms of the privileged few. As their cost dropped, editions became more plentiful and literacy expanded, books were being read everywhere and by everybody.

By the middle 1500s, both the form and content of books were transformed. To facilitate their newfound portability, the size of books fell from the folio (14.5” x 20”) to the octavo dimension (7” x 10.5”). By the beginning of the next century, popular literature—Cervantes’ Don Quixote of 1605 is widely recognized as the first European novel—supplanted verse and classic texts. New forms of print media developed such as chapbooks, broadsheets and newspapers.

Next generation e-paper

It seems clear that the dominance of LCD displays on computers, mobile and handheld devices is a factor in the persistent affinity of the public for print books. Much of the technology investment and advancement of the past decade—coming from companies such as Apple—has been committed to computer miniaturization, touch interfaces and mobility, not the transition from print to electronic media. While e-readers made important strides in their first decade, most e-books are still being read on devices that are visually distant from print books, impeding a more substantial migration to the new media.

Additionally, most current e-paper devices have many un-paper-like characteristics such as relatively small size, inflexibility, limited bit depth and the inability to write on them. All current-model e-paper Kindles, for example, are limited to 6-inch displays with 16 grey levels beneath a heavy and fragile layer of glass and no support for handwriting.

The Sony Digital Paper System (DPT-S1) is based on E Ink’s Mobius e-paper display technology: 13.3” format, flexible and supports stylus handwriting

A new generation of e-paper systems is now being developed that overcomes many of these limitations. In 2014, Sony released its Digital Paper System (DPT-S1), a letter-size e-reader and e-notebook ($1,100 at launch and currently selling for $799). The DPT-S1 is based on E Ink’s Mobius display, a 13.3” thin film transistor (TFT) platform that is flexible and can accept handwriting from a stylus.

Since it does not have any glass, the new Sony device weighs 12.6 oz, about half the weight of a comparable LCD-based tablet. With the addition of stylus-based handwriting capability, the device functions like an electronic notepad, and notes can be written in the margins of e-books and other electronic documents.

These advancements and others show that e-paper is positioned for a renewed surge into things that have yet to be conceived. Once a flat surface can be curved or even folded and then made to transform itself into any image—including a color image—at any time and at very low cost and very low energy consumption, then many things are possible like e-wall paper, e-wrapping paper, e-milk cartons and e-price tags. The possibilities are enormous.

The mobile juggernaut

Posted in Mobile, Mobile Media, Social Media on August 31, 2015 by multimediaman
Mark Zuckerberg

On August 27, Mark Zuckerberg posted the following message on his personal Facebook account, “We just passed an important milestone. For the first time ever, one billion people used Facebook in a single day. On Monday, 1 in 7 people on Earth used Facebook to connect with their friends and family.”

The Facebook one-billion-users-in-a-single-day milestone, reached on August 24, 2015, is remarkable for a social network that was started by Zuckerberg and a group of college dormitory friends in 2004. Facebook became available for public use less than ten years ago, and the milestone illustrates the speed and extent to which social media has penetrated the daily lives of people all over the world.

While Facebook is very popular in the US and Canada, 83.1% of the 1 billion daily active users (DAUs) come from other parts of the world. Despite being barred in China—where there are 600 million internet users—Facebook has hundreds of millions of active users in India, Brazil, Indonesia, Mexico, UK, Turkey, Philippines, France and Germany.

Facebook’s “Mobile Only” active users.

A major driver behind the global popularity and growth speed of Facebook is the mobile technology revolution. According to published data, Facebook reached an average of 844 million mobile active users during the month of June 2015 and industry experts are expecting this number to hit one billion in the very near future. Clearly, without smartphones, tablets and broadband wireless Internet access, Facebook could not have achieved the DAU milestone since many of the one billion people are either “mobile first” or “mobile only” users.

From mobile devices to wearables

When I last wrote about mobile technologies two-and-a-half years ago, the rapid rise of smartphones and tablets and the end of the PC era of computing was a dominant topic of discussion. Concerns were high that significant resources were being shifted toward mobile devices and advertising and away from older technologies and media platforms. The move from PCs and web browsers toward apps on smartphones and tablets was presenting even companies like Facebook and Google with a “mobility challenge.”

Today, while mobile device expansion has slowed and the dynamics within the mobile markets are becoming more complex, the overall trend of PC displacement continues. According to IDC, worldwide tablet market growth is falling, smartphone market growth is slowing and the PC market is shrinking. On the whole, however, smartphone sales represent more than 70% of total personal computing device shipments and, according to an IDC forecast, this will reach nearly 78% in 2019.

IDC’s Worldwide Device Market 5 Year Forecast

According to IDC’s Tom Mainelli, “For more people in more places, the smartphone is the clear choice in terms of owning one connected device. Even as we expect slowing smartphone growth later in the forecast, it’s hard to overlook the dominant position smartphones play in the greater device ecosystem.”

While economic troubles in China and other market dynamics have led some analysts to conclude that the smartphone boom has peaked, it is clear that consumers all over the world prefer the mobility, performance and accessibility of their smaller devices.

Ericsson’s June 2015 Mobility Report projects 6.1 billion smartphone users by 2020.

According to the Ericsson Mobility Report, there will be 6.1 billion smartphone users by 2020. That is 70% of the world’s population.

Meanwhile, other technology experts are suggesting that wearables—smartwatches, fitness devices, smartclothing and the like—are expanding the mobile computing spectrum and making it more complex. Since many wearable electronic products integrate easily with smartphones, it is expected this new form will push mobile platforms into new areas of performance and power.

Despite the reserved consumer response to the Apple Watch and the failure of Google Glass, GfK predicts that 72 million wearables will be sold in 2015. Other industry analysts are also expecting wearables to become untethered from smartphones and usher in the dawn of “personalized” computing.

Five mobile trends to watch

With high expectations that mobile tech will continue to play a dominant role in the media and communications landscape, these are some major trends to keep an eye on:

Wireless Broadband: Long Term Evolution (LTE) connectivity reached 50% of the worldwide smartphone market by the end of 2014 and projections show this will likely be at 60% by the end of this year. A new generation of mobile data technology has appeared every ten years since 1G was introduced in 1981. The fourth generation (4G) LTE systems were first introduced in 2012. 5G development has been underway for several years now and it promises speeds of several tens of megabits per user with an expected commercial introduction sometime in the early 2020s.

Apple’s A8 mobile processor is 50 times faster than the original iPhone processor.

Mobile Application Processors: Mobile system-on-a-chip (SoC) development is one of the most intensely competitive sectors of computer chip technology today. Companies like Apple, Qualcomm and Samsung are all pushing the capabilities and speeds of their SoCs to get the maximum performance with the least energy consumption. Apple’s SoCs have set the benchmark in the industry for performance: the iPhone 6 contains an A8 processor, which is 40% more powerful than the previous A7 chip and 50 times faster than the processor in the original iPhone. A new A9 processor will likely be announced with the next generation iPhone in September 2015 and is expected to bring a 29% performance boost over the A8.
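As a rough back-of-the-envelope check on that “50 times faster” figure (assuming, for this illustration only, that the original iPhone processor dates to 2007 and the A8 to 2014), the implied annual rate of improvement is easy to compute:

```python
# Implied average yearly performance growth if the A8 (2014) is 50x faster
# than the original iPhone processor (2007). The 50x figure is from the text;
# the years and the compounding interpretation are my own illustration.
speedup = 50
years = 2014 - 2007
annual_growth = speedup ** (1 / years)          # ~1.75
print(f"~{(annual_growth - 1) * 100:.0f}% faster per year on average")
# Roughly 75% per year, i.e. performance doubling about every 15 months.
```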

Pressure Sensitive Screens: Called “Force Touch” by Apple, this new mobile display capability allows users to apply varying degrees of pressure to trigger specific functions on a device. Like “touch” functionality—swiping, pinching, etc.—pressure-sensitive interaction with a mobile device adds a new dimension to the human-computer interface. This feature was originally launched by Apple with the Apple Watch, which has limited screen area on which to perform touch functions.

Customized Experiences: With mobile engagement platforms, smartphone users can receive highly targeted promotions and offers based upon their location within a retail establishment. Also known as proximity marketing, the technology uses mobile beacons with Bluetooth communications to send marketing text messages and other notifications to a mobile device that has been configured to receive them.
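As a minimal sketch of how that logic might look in code (the beacon identifiers, offers and distance threshold below are invented for illustration and do not describe any particular retailer’s or vendor’s system), the essential rule is: if an opted-in phone sees a known beacon at close range, push the offer tied to that beacon’s location.

```python
# Conceptual sketch of beacon-based proximity marketing.
# Beacon IDs, offers and the distance threshold are made up for illustration.

OFFERS_BY_BEACON = {
    "aisle-7-coffee": "20% off whole-bean coffee today",
    "entrance-main": "Welcome back! Open the app for today's deals",
}

def offer_for_sighting(beacon_id, distance_m, opted_in, max_distance_m=3.0):
    """Return a promotional message if this beacon sighting should trigger one.

    A message is sent only when the user has opted in, the beacon is one we
    recognize, and the estimated distance is within the threshold.
    """
    if not opted_in or distance_m > max_distance_m:
        return None
    return OFFERS_BY_BEACON.get(beacon_id)

if __name__ == "__main__":
    print(offer_for_sighting("aisle-7-coffee", 1.8, opted_in=True))   # offer sent
    print(offer_for_sighting("aisle-7-coffee", 8.0, opted_in=True))   # too far -> None
    print(offer_for_sighting("entrance-main", 1.0, opted_in=False))   # no opt-in -> None
```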

Mobile Apps: The mobile revolution has been a disruptive force for the traditional desktop software industry. Microsoft is now offering its Office Suite of applications to both iOS and Android users free of charge. In August, Adobe announced that it would be releasing a mobile and full-featured version of its iconic Photoshop software in October as a free download and as part of its Creative Cloud subscription.

With mobile devices, operating systems, applications and connectivity making huge strides and expanding across the globe by the billions, it is obvious that every organization and business should be charting its course in the wake of this technology juggernaut. This begins with an internal review of your mobile practices:

  • Do you have a mobile communications and/or operations strategy?
  • Is your website optimized for a mobile viewing experience?
  • Are you encouraging the use of smartphones and tablets and building a mobile culture within your organization?
  • Are you using text messaging for any aspect of your daily work?
  • Are you using social media to communicate with your members, staff, prospects or clients?

If the answer to any of these questions is no, then it is time to act.

AI and the future of information

Posted in Digital Media, Mobile, Social Media on June 30, 2015 by multimediaman
Amazon Echo intelligent home assistant

Last November, Amazon revealed its intelligent home assistant called Echo. The black cylinder-shaped device is always on and ready for your voice commands. It can play music, read audio books and it is connected to Alexa, Amazon’s cloud-based information service. Alexa can answer any number of questions regarding the weather, news, sports scores, traffic reports and your schedule in a human-like voice.

Echo has an array of seven microphones and it can hear—and also learn—your voice, speech pattern and vocabulary even from across the room. With additional plugins, Echo can control your automated home devices like lights, thermostat, kitchen appliances, security system and more with just the sound of your voice. This is certainly a major leap from “Clap on, Clap off” (watch “The Clapper” video from the mid-1980s here: https://www.youtube.com/watch?v=Ny8-G8EoWOw).

As many critics have pointed out, the Echo is Amazon’s response to Siri, Apple’s voice-activated intelligent personal assistant and knowledge navigator. Siri was launched as an integrated feature of the iPhone 4S in October 2011 and was added to the iPad in 2012. Siri is also now part of the Apple Watch, a wearable device that adds haptics—tactile feedback—and voice recognition, along with a digital crown control knob, to the human-computer interface (HCI).

If you have tried to use any of these technologies, you know that they are far from perfect. As the New York Times reviewer Farhad Manjoo explained, “If Alexa were a human assistant, you’d fire her, if not have her committed.” Oftentimes, using any of the modern artificial intelligence (AI) systems can be an exercise in futility. However, it is important to recognize that human-computer interaction has come a long way since mainframe consoles and command line interfaces were replaced by the graphical, point-and-click interaction of the desktop.

What is artificial intelligence?

The pioneers of artificial intelligence theory: Alan Turing, John McCarthy, Marvin Minsky and Ray Kurzweil

Artificial intelligence is the simulation of the functions of the human brain—such as visual perception, speech recognition, decision-making, and translation between languages—by man-made machines, especially computers. The field was started by the noted computer scientist Alan Turing shortly after WWII and the term was coined in 1956 by John McCarthy, a cognitive and computer scientist and Stanford University professor. McCarthy developed one of the first programming languages called LISP in the late 1950s and is recognized for having been an early proponent of the idea that computer services should be provided as a utility.

McCarthy worked with Marvin Minsky at MIT in the late 1950s and early 1960s and together they founded what has become known as the MIT Computer Science and Artificial Intelligence Laboratory. Minsky, a leading AI theorist and cognitive scientist, put forward a range of ideas and theories to explain how language, memory, learning and consciousness work.

The core of Minsky’s theory—what he called the society of mind—is that human intelligence is a vast complex of very simple processes that can be individually replicated by computers. In his 1986 book The Society of Mind Minsky wrote, “What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.”

The theory, science and technology of artificial intelligence have been advancing rapidly with the development of microprocessors and the personal computer. These advancements have also been aided by the growth in understanding of the functions of the human brain. In recent decades, the field of neuroscience has vastly expanded our knowledge of the parts of the brain, especially the neocortex and its role in the transition from sensory perceptions to thought and reasoning.

Ray Kurzweil has been a leading theoretician of AI since the 1980s and has pioneered the development of devices for text-to-speech, speech recognition, optical character recognition and music synthesis (the Kurzweil K250). He sees the development of AI as a necessary outcome of computer technology and has argued widely—in The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), The Singularity is Near (2005) and How to Create a Mind (2012)—that it is a natural extension of the biological capacities of the human mind.

Kurzweil, who corresponded as a New York City high school student with Marvin Minsky, has postulated that artificial intelligence can solve many of society’s problems. Kurzweil believes—based on the exponential growth rate of computing power, processor speed and memory capacity—that humanity is rapidly approaching a “singularity” in which machine intelligence will be infinitely more powerful than all human intelligence combined. He predicts that this transformation will occur in 2029, a moment in time when developments in computer technology, genetics, nanotechnology, robotics and artificial intelligence will transform the minds and bodies of humans in ways that cannot currently be comprehended.

Some fear that the ideas of Kurzweil and his fellow adherents of transhumanism represent an existential threat to society and mankind. These opponents—among them the physicist Stephen Hawking and the pioneer of electric cars and private spaceflight Elon Musk—argue that artificial intelligence could become the biggest “blowback” in history, along the lines of what is depicted in Kubrick’s film 2001: A Space Odyssey.

While much of this discussion remains speculative, anyone who watched in 2011 as the IBM supercomputer Watson defeated two very successful Jeopardy! champions (Ken Jennings and Brad Rutter) knows that AI has already advanced a long way. Unlike the human contestants, Watson was able to commit 200 million pages of structured and unstructured content, including the full text of Wikipedia, into four terabytes of its memory.

Media and interface obsolescence

Today, the advantages of artificial intelligence are available to great numbers of people in the form of personal assistants like Echo and Siri. Even with their limitations, these tools allow instant access to information almost anywhere and anytime with a series of simple voice commands. When combined with mobile, wearable and cloud computing, AI is making all previous forms of information access and retrieval—analog and digital alike—obsolete.

There was a time not that long ago when gathering important information required a trip—with pen and paper in hand—to the library or to the family encyclopedia in the den, living room or study. Can you think of the last time you picked up a printed dictionary? The last complete edition of the Oxford English Dictionary—all 20 volumes—was printed in 1989. Anyone born after 1993 is likely to have never seen an encyclopedia (the last edition of the Encyclopedia Britannica was printed in 2010). Further still, GPS technologies have driven most printed maps into bottom drawers and the library archives.

Among teenagers, instant messaging has overtaken email as the primary form of electronic communications

But that is not all.  The technology convergence embodied in artificial intelligence is making even more recent information and communication media forms relics of the past. Optical discs have all but disappeared from computers and the TV viewing experience as cloud storage and time-shifted streaming video have become dominant. Social media (especially photo apps) and instant messaging have also made email a legacy form of communication for an entire generation of young people.

Meanwhile, the touch/gesture interface is rapidly replacing the mouse and, with improvements in speech-to-text technology, is it not easy to visualize the disappearance of the QWERTY keyboard (a relic of the mechanical limitations of the 19th century typewriter)? Even the desktop computer display is in for replacement by cameras and projectors that can make any surface an interactive workspace.

In his epilogue to How to Create a Mind, Ray Kurzweil writes, “I already consider the devices I use and the cloud computing resources to which they are virtually connected as extensions of myself, and feel less than complete if I am cut off from these brain extenders.” While some degree of skepticism is justified toward Kurzweil’s transhumanist theories as a form of technological utopianism, there is no question that artificial intelligence is a reality and that it will be with us—increasingly integrated into us and as an extension of us—for now and evermore.

What is CRM and why do you need it?

Posted in Business systems, Mobile, Social Media on April 24, 2015 by multimediaman
CRM solutions (clockwise from top left) Salesforce.com, Microsoft Outlook Business Contact Manager, ACT! and SugarCRM.

I have used CRM software tools for more than ten years. Some of these were single user apps, some were client/server-based and included workgroup collaboration. Others were integrated with corporate-wide ERP systems and linked all departments together. Among the well-known solutions I have used are ACT!, Salesforce.com, SugarCRM and Microsoft Outlook Business Contact Manager.

Each of these has its strengths and weaknesses. Many functions and features are common to them all, such as contact management, sales pipeline management and sales forecasting. Each also has unique and distinguishing capabilities. Among the most important technical features of a CRM for me have been the following (a rough sketch of the data model behind such features appears after the list):

  • browser access
  • mobile app access
  • staff and management user levels
  • customizable dashboards
  • email client/server synchronization
  • APIs for ERP integration
  • automated email and text notifications for both staff and customers
  • custom and automatic report generation
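The sketch below shows, in very rough form, the kind of objects that sit behind those features—contacts, pipeline stages and opportunities—using field names I have invented for illustration; it does not reflect the schema of any particular CRM product.

```python
# Rough illustration of the core objects behind most CRM feature lists:
# contacts, pipeline stages and opportunities, plus a simple weighted forecast.
# Field names are invented for illustration; no specific CRM schema is implied.
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    LEAD = "lead"
    QUALIFIED = "qualified"
    PROPOSAL = "proposal"
    WON = "won"
    LOST = "lost"

@dataclass
class Contact:
    name: str
    company: str
    email: str

@dataclass
class Opportunity:
    contact: Contact
    value: float          # estimated revenue
    probability: float    # 0.0-1.0, used for forecasting
    stage: Stage = Stage.LEAD

def forecast(opportunities):
    """Weighted pipeline forecast: sum of value x probability for open deals."""
    open_deals = [o for o in opportunities if o.stage not in (Stage.WON, Stage.LOST)]
    return sum(o.value * o.probability for o in open_deals)

if __name__ == "__main__":
    pipeline = [
        Opportunity(Contact("Ann Lee", "Acme Print", "ann@example.com"), 12000, 0.6, Stage.PROPOSAL),
        Opportunity(Contact("Raj Patel", "Bold Media", "raj@example.com"), 8000, 0.3, Stage.QUALIFIED),
    ]
    print(f"Weighted pipeline forecast: ${forecast(pipeline):,.0f}")
```

Everything else on the list—browser and mobile access, dashboards, notifications, ERP integration—amounts to different ways of reading and writing these same records.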

The purpose of this article is to review the evolution and importance of customer relationship management as a business discipline and then explain some key lessons I have learned in my experience with CRM tools over the past decade.

Although it did not always have an acronym or business theory behind it, CRM has been practiced since the dawn of commerce. In short, customer relationship management refers to the methods that a business uses when interacting with its customers. Although CRM is often associated with marketing, new business development and sales functions, it actually encompasses the end-to-end experience that customers have with an organization.

Therefore, customer relationship management is an important part of every business; how you manage your client relationships—from initial contact to account acquisition and development through delivery of products and services … and beyond—is vital to your future. It stands to reason that companies that are very good at customer relationship management are often among the most successful businesses.

As computers came into widespread business use—especially the PC in the 1980s and the World Wide Web in the 1990s—the phrase customer relationship management and its acronym CRM began to acquire a specific meaning. By the late 1990s, entire schools of business thought were developed around strategies for the collection and handling of information and data about customer relations. CRM-specific technology platforms that place the customer at the center of business activity grew up around these theories.

In the first decade of the new century, the warehousing of customer information as well as the availability of demographic data about the population as a whole made it possible for CRM tools to be used for integrated and targeted marketing campaigns for new customer acquisition. Later, the growth of Big Data and cloud computing services moved CRM data out of the IT closet and made it available with software as a service (SaaS) solutions that are very flexible and can be deployed at any time and anywhere.

Most recently, social media has added another layer of information to CRM whereby companies can monitor or “listen” to dialogue between their organization and customers in real time.

CRM software industry growth

Source: Gartner Research

Business software industry experts are reporting that investment in CRM tools has been exploding and shows little sign of slowdown. According to an enterprise software market forecast by Gartner Research in 2013, total spending on CRM systems would pass that of ERP spending in 2016 and reach a total of $36 billion by 2017.

Cloud adoption by business functions

Source: Really Simple Systems

The Gartner Research study also showed that by 2014 cloud-based CRM systems would represent 87% of the market, up from 12% in 2008. Meanwhile, in their Cloud Attitudes Survey, Really Simple Systems showed that cloud-based adoption by CRM users is more than double that of all other business functions including accounting, payroll, HR and manufacturing.

Mobile CRM adoption

Source: Gartner Research

Along with the growth of Cloud-based CRM solutions—and also driving it—is mobile technology. According to Gartner Research, mobile CRM adoption experienced the following in 2014:

  • 500% growth rate in the number of apps rising from 200 to 1,200 on mobile app stores
  • 30% increase in the use of tablets by sales people
  • 35% of businesses have been moving toward mobile CRM apps

While these trends show that expectations are very high that increased CRM resources and investment will produce improved business results, there are countervailing trends that the path forward is far from a straight line. A survey by DiscoverOrg showed that nearly one quarter of all businesses do not have any CRM system. Additionally, one industry study shows that many organizations face setbacks during implementation and some (25-60%) fail to meet ROI targets.

Finally, other research shows that companies that have invested in CRM tools do not take advantage of some 80% of their potential benefits, especially integration and extension throughout the entire organization. All of the above statistics correspond with my own experience. While decision makers and business leaders have expectations that a CRM solution will significantly impact their bottom line, the challenges of implementation can be daunting and bog down the effort quickly.

Therefore, it is critical to have a CRM implementation plan:

  • Develop an integrated CRM strategy that places the customer at the center of all company departments and functions.
  • Map your IT infrastructure and identify all centers of customer data.
  • Evaluate, select and test a technology solution that is appropriate for your organization.
  • Utilize IT resources to build an architecture that will bring all or most of your customer data together within one system.
  • Identify champions in each department and build support and buy-in for the CRM throughout the company.
  • Work on your data quality and make sure that the information that is going into the system at startup does not compromise the project.
  • Provide training and share success stories to encourage everyone to use the system throughout the day.

In our intensely competitive environment, it is clear that CRM tools can enable an organization to effectively respond to multiple, simultaneous and complex customer needs. Every department—marketing, sales, customer service, production, shipping and accounting—has a critical role to play in building the customer database and using the CRM.

The following conclusions are derived from my experience:

  1. Few companies have implemented CRM technologies and even when CRM tools are available, few people embrace and use them.
  2. Those with effective CRM implementations are significantly outperforming the competition on the service and communication side of their business.
  3. The best and most successful companies connect their CRM infrastructure with business strategy and make its use part of their corporate culture.

Is your head in The Cloud or in the sand?

Posted in Digital Media, Mobile on August 18, 2014 by multimediaman

The Cloud is everywhere all the time; it knows who you are, where you are and it is casting its shadow upon you right now. Driven by shifts in technology and culture, The Cloud is part of our personal and professional lives whether we like it or not. If you have a Facebook account, your Timeline is in The Cloud; if you have a Flickr account, your photos are in The Cloud; if you have a Netflix account, the movies you watch are stored in The Cloud; if you have a DropBox account, your documents are in The Cloud.

The Cloud or cloud computing has many forms. One can think of it as computing delivered as a utility rather than through a piece of electronic hardware, a device or a program that you own. Cloud computing is associated with shared computer resources, such as data storage systems and applications, accessed over the Internet.

Popular providers of cloud computing products and services: (clockwise from top left) Apple iCloud, Amazon Cloud Drive, Adobe Creative Cloud, Microsoft SkyDrive, Oracle Cloud Computing and IBM Cloud

In contrast to the personal computing model—where every system has unique copies of software and data and a large local storage volume—cloud computing distributes and replicates these assets on servers across the globe. Historically speaking, The Cloud is a return—in the age of the Internet, apps and social media—to the time-sharing terminal computing model of the 1950s. It maintains computer processes and data functions centrally and enables users to access them from anywhere and at any time.

The phrase “The Cloud” was originally used in the early 1990s as a metaphor to describe the Internet. Beginning in 2000, the technologies of cloud computing began to expand exponentially and since then have become ubiquitous. Solutions like Apple’s .Mac (2000), MobileMe (2008) and finally iCloud (2011) have enabled public familiarity with cloud computing models. Certainly the ability to access, edit and update your personal digital assets—documents, photos, music, video—from multiple devices is a key feature of The Cloud experience.

The development and proliferation of cloud file sharing (CFS) systems such as DropBox, Google Drive and Microsoft SkyDrive—offering multiple gigabytes of file storage for free—have also driven mass adoption. Some industry analysts report that there are more than 500 million CFS users today.

Besides benefits for the consumer, cloud-based solutions are being offered by enterprise computing providers such as IBM and Oracle with the promise of significant financial savings associated with shared and distributed resources. In fact, The Cloud has become such an important subject today that every supplier of computer systems—as well as online retailers like Amazon—is hoping to cash in on the opportunity by offering cloud solutions to businesses and consumers.

For those of us in the printing and graphic arts industries, a prototypical example of cloud computing is Adobe’s Creative Cloud. Adopters of Adobe CC are becoming accustomed to monthly software subscription fees as opposed to a one-time purchase of a serialized copy as well as shared data storage of their creative graphics content on Adobe’s servers.

Digital Convergence

The concepts of digital convergence were developed and expanded by Ithiel de Sola Pool, Nicholas Negroponte and John Hagel III

In a more general sense, The Cloud is part of the process of digital convergence, i.e. the coming together of all media and communications technologies into a unified whole. The concept of technology convergence was pioneered at MIT by the social scientist Ithiel de Sola Pool in 1983. In his breakthrough book Technologies of Freedom, De Sola Pool postulated that digital electronics would cause the modes of communication—telephone, newspapers, radio, and text—to combine into one “grand system.”

Nicholas Negroponte, founder of the MIT Media Lab, substantially developed the theory of digital convergence in the 1980s and 1990s. Long before the emergence of the World Wide Web, Negroponte was foretelling that digital technologies were causing the “Broadcast and Motion Picture Industry,” the “Computer Industry” and the “Print and Publishing Industry” to overlap with each other and become one. As early as 1978, Negroponte was predicting that this process would reach maturity by the year 2000.

At the center of digital convergence—and the growth and expansion of The Cloud—is the acceleration of electronic technology innovation. John Hagel III of The Center for the Edge at Deloitte has identified the following technological and cultural components that are responsible for this accelerated development.

Infrastructure and Users

The cost/performance trends of core digital technologies are closely associated with Moore’s Law, i.e. that the number of transistors on an affordable CPU doubles approximately every two years. By extension, this law of exponential innovation can also be applied to other digital technologies such as storage devices and Internet bandwidth. In simple terms, what this means is that the quantity of information that can be processed, transmitted and stored per dollar spent is accelerating over time. The development of digital convergence and of cloud computing is entirely dependent upon these electronic technology shifts. The following graphs illustrate this:

The cost of computing power has decreased significantly, from $222 per million transistors in 1992 to $0.06 per million transistors in 2012. The decreasing cost-performance curve enables the computational power at the core of the digital infrastructure.

Similarly, the cost of data storage has decreased considerably, from $569 per gigabyte of storage in 1992 to $0.03 per gigabyte in 2012. The decreasing cost-performance of digital storage enables the creation of more and richer digital information.

The cost of Internet bandwidth has also steadily decreased, from $1,245 per 1000 megabits per second (Mbps) in 1999 to $23 per 1000 Mbps in 2012. The declining cost-performance of bandwidth enables faster collection and transfer of data, facilitating richer connections and interactions.
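Taken together, those three cost curves imply doubling times close to the two-year cadence of Moore’s Law; a quick calculation using only the start and end figures quoted above makes that explicit:

```python
import math

def doubling_time(start_cost, end_cost, years):
    """Years for cost-performance to double, given start and end cost over a period."""
    improvement = start_cost / end_cost          # total cost-performance gain
    return years / math.log2(improvement)

# Start/end figures quoted in the captions above.
print(f"Computing: {doubling_time(222, 0.06, 2012 - 1992):.1f} years per doubling")
print(f"Storage:   {doubling_time(569, 0.03, 2012 - 1992):.1f} years per doubling")
print(f"Bandwidth: {doubling_time(1245, 23, 2012 - 1999):.1f} years per doubling")
# Roughly 1.7, 1.4 and 2.3 years respectively.
```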

Culture: Installed Base

Tracking closely with the acceleration of computer technology innovation—and also driving it—is the rate of adoption of these technologies by people. Without the social and practical implementation of innovation, digital convergence and The Cloud could not have moved from laboratory and theoretical possibility into modern reality. Both the number of Internet users and the number of wireless subscriptions are core to the transformations in human activity that are fueling the shift from the era of the personal computer to that of mobile, social media and cloud computing.

Additionally, the use of the Internet continues to increase. From 1990 to 2012, the percent of the US population accessing the Internet at least once a month grew from near 0 percent to 71 percent. Widespread use of the Internet enables more widespread sharing of information and resources.

More and more people are connected via mobile devices. From 1985 to 2012, the number of active wireless subscriptions relative to the US population grew from 0 to 100 percent (reflecting the fact that the same household can have multiple wireless subscriptions). Wireless connectivity is further facilitated by smartphones. Smart devices made up 55 percent of total wireless subscriptions in 2012, compared to only 1 percent in 2001.

Innovation Comparison

The full implications of these changes are hard to comprehend. Some experts point out that previous generations of disruptive technology—electricity, telephone, internal combustion engine, etc.—have, after an initial period of accelerated innovation, been followed by periods of stability and calm. In our time, the cost/performance improvement of digital technologies—and the trajectory of Moore’s Law—shows no sign of slowing down in the foreseeable future.

While it is increasingly difficult to keep up with the demands of this change, we are compelled to do so. The fact that we have been in The Cloud for some time now means that our conceptions and plans must reflect this reality. We cannot attempt to hide from The Cloud in our personal and professional affairs any more than we could have hidden from the personal computer or the smartphone. The key is to embrace The Cloud and find within it new opportunities for harnessing its power to become more effective and successful in our daily lives and business offerings to customers.

2013: A big year for Big Data

Posted in Digital Media, Mobile, Social Media on December 7, 2012 by multimediaman

The year 2013 will be important for a couple of reasons. Believe it or not, 2013 marks the twentieth anniversary of the World Wide Web. It is true that Tim Berners-Lee developed the essential technologies of the web at the CERN laboratory in Switzerland in 1989-90. However, it was the first graphical browser, Mosaic—developed by a team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign—in April 1993 that made the web enormously popular.

Marc Andreessen, developer of the first graphical web browser Mosaic in 1993.

Without Mosaic, the brainchild of NCSA team member Marc Andreessen, the explosive growth of the web in the 1990s could not have happened. Mosaic brought the web outside the walls of academia and transformed it into something that anyone could use. In June 1993 there were only 130 web sites; two years later there were 230,000 sites. In 2007 there were 121 million web sites; it is estimated that there are now 620 million web sites. Now that qualifies as exponential growth.

This brings me to the second reason why this year is important: worldwide digital information will likely surpass 4 zettabytes of data in 2013. This is up from 1.2 zettabytes in 2010. Most of us are familiar with terabytes; a zettabyte is 1 billion terabytes. In between these two are petabytes (1 thousand terabytes) and exabytes (1 million terabytes). 2013 is going to be a big year for Big Data.
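Since each prefix is a factor of 1,000 larger than the one before it, the relationships above can be checked in a few lines:

```python
# Decimal (SI) data-size units, each 1,000 times larger than the previous one.
TB = 10 ** 12            # terabyte
PB = 1_000 * TB          # petabyte
EB = 1_000_000 * TB      # exabyte
ZB = 1_000_000_000 * TB  # zettabyte = 1 billion terabytes

print(f"1 zettabyte = {ZB // TB:,} terabytes")
print(f"Growth from 2010 to 2013: {4 * ZB / (1.2 * ZB):.1f}x (1.2 ZB to 4 ZB)")
```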

Companies that grew up in the age of the World Wide Web are experts at Big Data. As of 2009, Google was processing 24 petabytes of data each day to provide contextual responses to web search requests. Wal-Mart records one million consumer transactions per hour and imports them into a database that contains 2.5 petabytes. Facebook stores, accesses and analyzes 30+ petabytes of user-generated data.

The expansion of worldwide Big Data and the metric terms to describe it (yottabytes or 1,000 zettabytes are coming next—beyond that is TBD) has become the subject of much discussion and debate. Big Data is most often discussed in terms of the four V’s: volume, velocity, variety and value.

Volume

The accumulation of Big Data volume is being driven by a number of important technologies. Smartphones and tablets, along with social media networks like Facebook, YouTube and Twitter, are important Big Data sources. There is another less visible, but nonetheless important, source of Big Data: the “Internet of Things.” This is the collection of sensors, digital cameras and other data gathering systems (such as RFID tags) attached to a multitude of objects and devices all over the world. These systems are generating enormous amounts of data 24/7/365.

Velocity

The speed of Big Data generation is related to the expansion and increased performance of data networks both wired and wireless. It is also the result of improved capturing technologies. For example, one minute of high definition video generates between 100 and 200 MB of data. This is something that anyone with a smartphone can do and is doing all the time.
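That 100-200 MB per minute works out to an average recording bitrate in the tens of megabits per second—the conversion is simple arithmetic on the figures just quoted:

```python
# Convert "megabytes of video per minute" into an average bitrate in Mbit/s.
def bitrate_mbps(megabytes_per_minute):
    bits_per_minute = megabytes_per_minute * 8_000_000   # decimal MB -> bits
    return bits_per_minute / 60 / 1_000_000              # bits/s -> Mbit/s

for mb in (100, 200):
    print(f"{mb} MB/min ≈ {bitrate_mbps(mb):.0f} Mbit/s")
# Roughly 13-27 Mbit/s, typical of consumer HD video recording.
```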

Variety

The Big Data conversation is more about the quality of the information than about its size and speed. Our world is full of information that lies outside structured datasets. Much of it cannot be captured, stored, managed or analyzed with traditional software tools. This poses many problems for IT professionals and business decision makers: what is the value of information that is largely “exhaust data”?

Value

There are good internal as well as external business reasons for sharing Big Data. Internally, if exhaust data is missed in the analytical process, executives are making decisions based upon intuition rather than evidence. Big Data can also be used externally as a resource for customers that otherwise would be unable to gain real-time access to detailed information about the products and services they are buying. It is the richness and complexity of Big Data that makes it so valuable and useful for both the executive process and customer relationships.

Every organization today is gathering Big Data in the course of its daily activities. In most cases, the bulk of the information is collected in a central EMS or ERP system that connects the different units and functional departments of the organization. But more likely than not, these systems are insufficient and cannot support all data gathering activities within the organization. There are probably systems that have been created ad-hoc to serve various specialized needs and solve problems that the centralized system cannot address. The challenge of Big Data is to capture all ancillary data that is getting “dropped to the floor” and make it useful by integrating it with the primary sources.

Making Big Data available offers organizations the ability to establish a degree of transparency internally and externally that was previously impossible. Sharing enables organization members and customers to respond quickly to rapidly changing conditions and circumstances. Some might argue that sharing Big Data is bad policy because it allows too much of a view “behind the curtain.” But the challenge for managers is to securely collect, store, organize, analyze and share Big Data in a manner that makes it valuable to those who have access and can make use of it.

I remember—upon downloading the Mosaic browser in 1993 with my dial up connection on my desktop computer—how thrilling it was to browse the web freely for the first time. It seemed like Mosaic was the ultimate information-gathering tool. I also remember how excited I was to get my first 80 MB hard disk drive for data storage. The capacity seemed nearly limitless. As we look back and appreciate the achievements of twenty years ago, we now know that those were really the beginnings of something enormous that we could not have fully predicted at the time.

With the benefit of those experiences—and many more over the past two decades of the transition from analog to online and electronic media—it is important to comprehend as best one can the meaning of Big Data in 2013 and where it is going. Those organizations that recognize the implications and respond decisively to the challenges of the explosive growth of structured and unstructured data will be the ones to establish a competitive advantage in their markets.

Digital trends: Where’s your camera?

Posted in Digital Media, Mobile, Photography on August 30, 2012 by multimediaman

In June 1994, I bought my first digital camera: an Apple QuickTake 100. It was the first consumer-level digital camera and cost about $695. Developed jointly by Apple and Kodak, it was a fascinating breakthrough device.

On the day I bought the camera, I connected it via serial cable to my Mac, installed the QuickTake 1.0 software (from a floppy disk) and downloaded the first digital photos I had ever taken. I brought the pictures into Photoshop and started editing them; these were images that did not come from film and did not require scanning. Wow, I thought, how much time am I going to save with this nifty little camera.

Well, not so fast. The images had a resolution of 640×480 pixels (about one third of a megapixel in today’s terms) and were not very useful for print reproduction. But they were perfect for standard definition video display and I could see how they could be used in presentations and slide shows.

Over the next few years, while I was fiddling around with the novelty of digital photography, I continued using my Canon 35mm SLR to shoot film negatives and transparencies. I’d shoot rolls of film and drop them off at the local camera store for processing and print making and continued to do this for many more years. It wasn’t until 2000 that I made the transition permanently to digital photography.

Fast forward to 2012 … Last weekend, for the first time I deposited a check into my bank account using the mobile banking app on my iPhone. I also shot a video and took photos of a family picnic in my back yard and posted the photos and video to my Facebook page immediately. I was even able to assemble and edit my video clips using the iMovie app on my iPhone.

And, on the same weekend, I saw someone using an iPad to shoot video of a football scrimmage … they were using the iPad screen as a viewfinder as they followed the players down the football field.

Needless to say, in the 18 years between these experiences, camera technology has undergone a transformation. The last two decades have seen not only the replacement of conventional film photography by digital photography but also, more recently, the displacement of single-purpose digital cameras (both video and still) by smartphones.

The pace and magnitude of these dual transformations are seen clearly in the answers to the following questions:

When did digital photography eclipse film photography?
In 1990, 100% of photography was analog/film based. Ten years later, in 2000, 99% of photography was still analog and just 1% was digital. The big change took place over the past decade. By 2011, 99% of photography was digital and 1% film.

How many photos are being taken?
It has been estimated (by 1000memories blog) that since photography was first invented in 1838, there have been 3.5 trillion pictures taken. Today, every two minutes, we snap as many photos as were taken by all of humanity in the entire 19th century. In 1990 there were 57 billion photos taken, in 2000 there were 86 billion taken and in 2011 there were 380 billion taken.

Are mobile and smartphones replacing cameras and camcorders?
It has been estimated (by NPD Group) that in 2010 camera phones accounted for 17% of all images while point-and-shoot cameras and camcorders accounted for 52%. In just one year, these numbers changed to 27% for camera phones and 44% for point-and-shoot cameras and camcorders. The balance of the imagery is still dominated by higher-end digital photographic and video equipment.

Where are all the digital photos being stored?
The biggest library of online photos is Facebook. It has been estimated (by the pixable blog) that over 100 billion photos have been uploaded to Facebook by its users. The following is a list of the top photo sharing sites and their image volumes:

  • Photobucket: 10 billion photos
  • Picasa: 7 billion photos
  • Flickr: 6 billion photos
  • Instagram: 400 million photos

Instagram is the fastest growing online photo sharing technology and it was purchased by Facebook earlier this year for $1 billion.

The ubiquity and ease of use of cameras on smartphones—capable of shooting high quality color photos and video—combined with social networking and photo sharing have led to an explosion in digital photography. Almost anyone can capture a scene at any time and people are doing it, all the time.

As with other developments in our digital world, a transformation of one kind—the replacement of film by digital photography—is not fully completed when a transformation of another kind—the replacement of digital point-and-shoot cameras and camcorders by camera and smart phones—accelerates the entire process and evolves in an unanticipated direction.

It is these sudden and unexpected twists that make navigating the business environment such a complex task. The challenges facing Kodak, which filed for bankruptcy reorganization last January, are an expression of the way these rapid changes can impact companies and entire industries. Once the king of analog photographic equipment and supplies as well as an originator of the digital camera revolution, Kodak announced on August 23 that it was selling off its film division.

The ability to see and understand the convergence and successive waves of digital transformation, and the way these impact the behavior of our customers, is the only way to keep pace in our rapidly changing world and make plans for the future.