THE VALUE PARADOX: ACCOUNTING FOR INTANGIBLE ASSETS

“Information wants to be free,” writer Stewart Brand famously quipped in his 1985 book The Media Lab: Inventing the Future at MIT. “Information also wants to be expensive . . . That tension will not go away.”

Brand’s gnomic aphorism has emerged as the central paradox of the Vaporized Era. We’re awash in information. It fills our inboxes and laptops, clogs our networks, and leaves us feeling chronically overworked and unfinished. And yet we are instinctively aware that this torrent of data is valuable, so we spend an ever-increasing amount of time and money managing the tide. Of all of humanity’s various milestones in these tumultuous early years of the twenty-first century, one achievement stands out: we are champions of generating new information.

Every year since 2005, EMC Corporation, a data storage company, has sponsored the IDC Digital Universe Study, which estimates the amount of information generated on digital networks. We are doubling the amount of data roughly every eighteen months and, according to the report, in 2010 human society crossed an important threshold: we collectively generated a zettabyte (1 ZB) of data.

Most people are familiar with smaller units of measurement, such as kilobytes and megabytes, because we get reminders from our email systems to limit the size of attachments. One thousand megabytes is a gigabyte, and 1,000 gigabytes makes a terabyte. If you’ve backed up your family photos on an external hard drive lately, you’ve probably used a terabyte drive. Now imagine 1,000 of those terabyte hard drives and you’ve got a petabyte. A thousand petabytes is an exabyte, which until recently was the biggest measure used to describe data.

In the 2000s the data storage industry measured the total output of human information generation in tens of exabytes. Now we’re going to the next level. A zettabyte is a thousand exabytes; in other words, it’s 10²¹ bytes.
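
To keep the ladder straight, note that each prefix is a factor of 1,000. A minimal Python check of the progression described above:

```python
# Each step up the ladder of SI prefixes is a factor of 1,000 (10**3),
# so seven steps from the byte lands at 10**21: the zettabyte.
PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta"]
for i, prefix in enumerate(PREFIXES, start=1):
    print(f"1 {prefix}byte = 10**{3 * i} bytes")
```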

To put it in perspective, open your desk drawer and fish out a one-gigabyte USB flash drive. It’s just about one inch long. A decade ago, a gigabyte was more information than the most powerful desktop computer had on board; today we give away one-gig flash drives for free at trade shows.

Now imagine putting 100 similar flash drives in a line and then making 100 more lines. You’d end up with a 100 × 100 grid of 10,000 flash drives, which would make a rectangle approximately ten feet by six feet (depending upon the exact size of your flash drive). That rectangle holds ten terabytes.

Suppose that you stacked 100 more rectangles exactly like this on top of each other to form a cube of flash drives. The cube is a petabyte. Now picture 1,000 such cubes; they would cover a football field. That’s an exabyte. Finally, imagine a million such cubes! That’s a zettabyte. We’re talking about city blocks covered with flash drives stacked 100 deep. A zettabyte is one million million gigabytes.
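
For skeptical readers, the arithmetic holds up. A minimal sketch verifying the stack, assuming one-gigabyte drives:

```python
# Verifying the flash-drive arithmetic above, assuming 1 GB (10**9 byte) drives.
GB = 10**9
rectangle = 100 * 100 * GB     # 100 drives per line, 100 lines: 10**13 bytes (10 TB)
cube = 100 * rectangle         # 100 stacked rectangles: 10**15 bytes (1 petabyte)
exabyte = 1_000 * cube         # 1,000 cubes: 10**18 bytes (1 exabyte)
zettabyte = 1_000_000 * cube   # a million cubes: 10**21 bytes (1 zettabyte)
print(zettabyte == 10**21)         # True
print(zettabyte // GB == 10**12)   # True: one million million gigabytes
```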

To illustrate how much data that is, the authors of the IDC report, John Gantz and David Reinsel, wrote: “Picture a stack of DVDs reaching from the earth to the moon and back.” That’s roughly how much information human society produced in 2010.

And we keep creating more. By 2011, the IDC report says, we nearly doubled that amount to 1.8 ZB. By 2012 we generated 2.8 ZB, and in 2013 we generated 4.4 ZB. Every year the estimate in the report is revised upwards. At this pace we have already outstripped IDC’s original prediction of 40 ZB by 2020; the authors recently revised that estimate to 44 ZB. At that point, Gantz and Reinsel’s stack of DVDs will reach halfway to Mars.

According to Chuck Hollis, the former chief technology officer at EMC, that means in 2020 at least 1.7 megabytes of data will be generated for every person on the planet, every second of every day. There will be 5,200 gigabytes of data for every person on the planet. That data is not all photos and videos that we share on social networks. In fact, far more data is generated about us than we generate directly via sharing. Each year we connect more and more machines to the Internet, and they will soon contribute even more information to the data pile than humans do.
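
As a rough cross-check of those projections, here is a minimal sketch; the 2020 population figure is my assumption, not IDC’s:

```python
# Cross-checking the per-person figure against IDC's projected 2020 total.
GB, ZB = 10**9, 10**21
population_2020 = 7.7e9        # assumed world population in 2020
per_person = 5_200 * GB        # IDC's projected per-capita figure
print(f"Implied global total: {per_person * population_2020 / ZB:.0f} ZB")  # ~40 ZB
```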

In economic terms, this absurd oversupply suggests that information should be incredibly cheap. Much of it is indeed freely available, even if it takes some effort to gather and organize, but freely available does not mean worthless. In the right organization, data is incredibly valuable. Yet today very few chief executives can say how much their data assets are really worth.

INFONOMICS TO THE RESCUE

This value paradox lies at the heart of a new discipline called infonomics, a term coined by analyst Doug Laney of Gartner Research to describe how businesses determine the economic value of their information.

He points out that information behaves like no other item on a corporate balance sheet and that, lacking a generally agreed means to measure data’s worth, accountants typically don’t include it in the company’s books as an asset. As a result, insurance companies explicitly exclude coverage for data assets, which means that if your server crashes and your corporate data is wiped out, don’t even think about submitting a claim!

This inability to account for the value of data leaves companies teetering in a precarious spot as ever-greater chunks of their business processes and products are vaporized into pure digital information. Moreover, Laney points out that there is a gap between the potential value and the realized value of information assets. Unless it is managed effectively, data is a liability, not an asset.

It costs a lot of money to collect and archive data, and it’s even more expensive to process that information in order to arrive at useful insights. Experts must be hired, and expensive equipment and software are required. Some databases must be licensed or purchased; others are obtained via barter or exchange with partner firms. These costs add up quickly, which makes it necessary for managers to quantify the value of data in order to justify these outlays. That’s easier said than done.

Laney proposes several methods of establishing the value of intangible corporate data assets: the estimated cost to replace the data, the potential price it might fetch on the open market, or the amount it contributes to a revenue stream. Another approach is to measure the data’s impact on internal business operations.
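
To make the contrast between those approaches concrete, here is a minimal sketch; the function names, figures, and discount rate are hypothetical illustrations, not Laney’s formal infonomics models:

```python
# A sketch of three of the valuation approaches named above. All functions,
# figures, and the discount rate are hypothetical, for illustration only.

def cost_value(collection_cost: float, processing_cost: float) -> float:
    """Cost approach: what it would take to replace the data asset."""
    return collection_cost + processing_cost

def market_value(price_per_record: float, records: int, exclusivity: float = 1.0) -> float:
    """Market approach: the price the data might fetch if licensed or sold.
    `exclusivity` discounts the price when the same data is sold to many buyers."""
    return price_per_record * records * exclusivity

def income_value(annual_revenue_contribution: float, years: int, discount_rate: float) -> float:
    """Income approach: discounted revenue the data contributes over its useful life."""
    return sum(annual_revenue_contribution / (1 + discount_rate) ** t
               for t in range(1, years + 1))

print(f"Cost value:   ${cost_value(250_000, 400_000):,.0f}")
print(f"Market value: ${market_value(0.05, 10_000_000, exclusivity=0.5):,.0f}")
print(f"Income value: ${income_value(300_000, years=5, discount_rate=0.10):,.0f}")
```

Note that the three methods will generally disagree for the same data set, which is one reason a consensus figure never reaches the books.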

But arriving at a consensus on the value of data as an asset is just the first step towards measuring the total value of the Vaporized Economy. The standard methods of measuring gross domestic product (GDP) do not contemplate the process of replacing physical goods with information. A new smartphone comes loaded with vaporized substitutes for the camera, video recorder, game console, MP3 player, map, compass, alarm clock, notebook, and dozens of other products, but the only item recorded in the national GDP is the smartphone. The rest of those products just disappear off the balance sheet. As the vaporized portion of the economy grows and expands, it could lead to a paradoxical outcome: a shrinking GDP.

Our collective inability to measure and account for the value of the rapidly expanding intangible portion of the economy is the biggest self-imposed obstacle on the path to the Vaporized Economy. As more and more portions of industry, commerce, society, and culture migrate to digital vapor, the quantity and value of information assets will continue to increase, but our ability to measure them and invest adequately to manage them languishes. Closing this valuation gap will emerge as a major priority for business leaders.

The biggest conceptual challenge for traditional companies and conventional thinkers will be to perceive their intangible data assets as their most valuable property. It’s difficult to break the habit of prioritizing tangible, physical stuff. However, as the process of vaporizing things proceeds, those physical assets may be sold or written off as junk. Proprietary systems and processes may be upgraded and replaced, but information will reign supreme as an ever-growing asset and the primary driver of profitability.

Those who already see this opportunity can position themselves for a data-driven future. They can develop internal expertise at managing and analyzing huge data sets, and they will gather data from as many sources as possible. Many managers fail to see this future, however. Blinded by old-fashioned accounting practices and a sentimental attachment to physical facilities and tangible products, they will miss the opportunity to convert their business, as Linotype did, to an entirely information-based enterprise. They will continue to underinvest in data systems and the skill sets to manage them, falling further behind and quite possibly riding their physical infrastructure into obsolescence and obscurity.

As Laney wrote in the Financial Times: “Ultimately, executives who just continue to talk about information as one of their company’s most critical assets, yet continue to eschew measuring and managing it as one, are doomed to continue having underperforming information assets. It may mean underperforming businesses as well.”

USING SENSOR NETWORKS TO MAKE THE INVISIBLE VISIBLE

The notion that the entire world can be measured and described with numbers may be as old as the Greek philosopher Pythagoras, but until recently we lacked the means to measure everything with sufficient precision to be useful. Today we find ourselves on the brink of that fascinating possibility.

It’s an axiom of business management that whatever is measured can be improved: for the better part of a century, business managers have been measuring what they could, optimizing where they could, and taking an educated guess at the rest of the operation. There remained a big gap between theory and practice. But that’s changing fast with the introduction of cheap sensors, ever-more powerful microprocessors, and always-on wireless connectivity. The combination of these low-cost components makes it possible to measure millions, even billions, of micro-actions that were previously unrecorded. Data analysis tools make it possible to convert that information instantly into useful charts and gauges that provide a vivid visualization of those millions of actions.

This new information brings clarity to business management decisions that were previously based on murky perceptions or pure gut instinct. Measuring exactly the number of cars that drive past two particular parcels of real estate each day helps tell a developer which is the better location for a new shopping center. Counting precisely the number of coho salmon that escape upstream to spawn each day helps the boards of the Alaska Department of Fish and Game manage stocks in the coastal fisheries. Constantly recording the pH level of the water surrounding coral reefs provides evidence of the impact of climate change on undersea life.

Sensors are everywhere today: on oceangoing buoys to measure ocean currents, temperature, and even sub-sea seismic activity; on cattle to measure location and temperature, even bovine methane-gas releases (I am not making this up!); and in Asian airports to measure the body temperature of passengers to spot outbreaks of severe acute respiratory syndrome (SARS) and other communicable diseases.

We are gradually moving into an era in which everything that can be measured will be. What’s driving this change is the combined impact of Moore’s law and Metcalfe’s law on the technology for data collection. As we’ve seen with semiconductors, computer memory, and other computer components, Moore’s law tells us that even as demand rises, prices don’t rise; instead, they fall dramatically while performance increases. In other words, sensors will continue to become smarter, smaller, more energy efficient, and cheap enough to be deployed everywhere. And Metcalfe’s law tells us that as these tiny devices are connected, the value of the entire network of deployed sensors grows far faster than its size, roughly as the square of the number of connected nodes.
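
A toy model makes the interplay visible. The constants below are assumptions, with Moore’s law rendered loosely as sensor cost halving every two years and Metcalfe’s law as value proportional to the number of possible pairwise links:

```python
# A toy model of the two laws as applied above. The constants are assumptions:
# Moore's law is modeled as sensor cost halving every two years, and Metcalfe's
# law values the network by its number of possible pairwise connections.

def sensor_cost(year: int, base_cost: float = 10.0, start_year: int = 2015) -> float:
    """Unit cost halves roughly every two years (Moore's law, loosely applied)."""
    return base_cost * 0.5 ** ((year - start_year) / 2)

def network_value(n: int, value_per_link: float = 0.01) -> float:
    """Value grows with the n*(n-1)/2 possible links (Metcalfe's law)."""
    return value_per_link * n * (n - 1) / 2

for year, n in [(2015, 1_000), (2017, 10_000), (2019, 100_000)]:
    deploy_cost = sensor_cost(year) * n
    print(f"{year}: {n:>7,} sensors cost ${deploy_cost:>11,.0f}; "
          f"network value ~ ${network_value(n):>14,.0f}")
```

Even in this crude sketch, the value of the network pulls away from the cost of deploying it as the sensor count climbs.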

Measuring everything means optimizing everything. With real-time data, firms can lower their costs, eliminate inefficiency, reduce spoilage, and deliver products where they are most needed in a timely way. However, Big Data isn’t exclusively for Big Business. The law of increasing returns means that the first company to enter a category has an unprecedented opportunity to achieve a winner-take-all effect. How is your company positioned? There’s plenty of opportunity in data for feisty startup companies.

How SteadyServ is using data and smart devices to reinvent brewing

The prospect of measuring what was previously unmeasurable opens up unexplored terrain. That’s the kind of opportunity ideally suited to nimble startups that don’t have a legacy business to defend.

For imaginative entrepreneurs, vast networks of cheap sensors will provide x-ray vision akin to a comic-book superhero’s: the ability to see what was previously invisible. That’s an insanely unfair first-mover advantage. Such x-ray vision makes it possible to find opportunities in even the oldest, most traditional industries. Even in a truly ancient industry like brewing.

Apart from the addition of stainless steel brewing vats, the process of brewing beer hasn’t changed much in the past 1,000 years. And yet Steve Hershberger, a technology entrepreneur who built systems for tracking industrial manufacturing and distribution, uncovered an opportunity in the beverage industry. As a hobby, Hershberger once invested in a microbrewery based in his hometown of Indianapolis, Indiana. From the outset he applied the tech-geek discipline of measuring everything: borrowing from his experience in Silicon Valley, Hershberger developed a set of key performance indicators (KPIs) for the brewery to track every measurable aspect of the brewing process.

As the fledgling brewery began to win awards and prosper, Hershberger shifted his attention to the distribution business. He wanted to learn about the data systems used by the beer distributors and their customers. He was mildly stunned to realize that there were no such systems in the entire beverage industry. Zilch. Zip. Nada. What he learned about the beverage business was shocking to an info-tech geek.

How does a bartender know how much beer is left in a keg? He lifts the metal canister by one edge, swirls the contents inside, and makes his best wild-ass guess. Sometimes the bartender guesses right and sometimes he is wrong: if he’s wrong, the bar will run out of a popular brew on a busy night. Unhappy patrons won’t stick around for another round. Multiplied across the 500,000 bars in the United States, this rudimentary technique leads to billions of dollars of lost income.

How does a beverage company find out which beers customers are ordering? Today it sends college students equipped with clipboards and pencils to ask patrons in bars. It’s hard to imagine a less reliable source of information for a mature $100 billion industry than the alcohol-saturated memories of a partying crowd after happy hour.

The beverage and hospitality industries lacked a system for tracking the consumption of draft beer in real time. To Hershberger this was a golden opportunity to apply everything he knew about information technology to the ancient brewing business. In 2012 he started a new company called SteadyServ.

To solve the data problem in the beverage industry, SteadyServ created a product called iKeg that consists of a metal ring loaded with sensors and a wireless transmitter. The ring is attached to the bottom of a keg of beer and provides a steady stream of information based on weight and pressure changes. The information is relayed wirelessly to SteadyServ’s cloud, where it is converted into usable structured data (a sketch of this kind of weight-based estimate follows the list below). This information is then streamed to specialized mobile apps designed to optimize decision-making all the way through the brewing value chain:

> SteadyServ’s mobile apps enable a bartender to keep track of all of the inventory in-house and order fresh beer with a single touch on the screen.

> Another app alerts the distributor automatically when a keg is nearly empty, thereby ensuring that a replacement will be on the delivery truck.

> The app for bar owners enables the proprietor to monitor the sales, by the glass and in real time, of every keg of beer in-house.

> A single integrated data feed in the app allows bar owners who manage several establishments to manage them all remotely, so they can compare the performance of one bar against another.

> More broadly, the aggregate data generated across all participating bars in a given city or region can provide beverage marketers and distributors with useful insight into breaking trends in different regions, even neighborhoods.
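
Here is a minimal sketch of the kind of weight-based estimate such a sensor ring could drive. Every constant (keg tare weight, beer density, pour size, reorder threshold) is an assumption for illustration, not SteadyServ’s actual design:

```python
# A sketch of the kind of weight-based estimate an iKeg-style sensor ring
# could compute. Every constant here (keg tare weight, beer density, pour
# size, reorder threshold) is an assumption, not SteadyServ's actual design.

EMPTY_KEG_KG = 13.5      # tare weight of a typical half-barrel keg (assumed)
BEER_DENSITY = 1.010     # kg per liter, roughly (varies by style)
PINT_LITERS = 0.473      # one US pint

def liters_remaining(measured_kg: float) -> float:
    """Estimate the beer left in the keg from the ring's weight reading."""
    return max(measured_kg - EMPTY_KEG_KG, 0.0) / BEER_DENSITY

def pints_remaining(measured_kg: float) -> int:
    return int(liters_remaining(measured_kg) / PINT_LITERS)

def needs_reorder(measured_kg: float, threshold_pints: int = 20) -> bool:
    """The cloud service could trigger the distributor alert when this trips."""
    return pints_remaining(measured_kg) < threshold_pints

reading = 25.0  # kg, a hypothetical reading relayed from the ring
print(f"{pints_remaining(reading)} pints left; reorder now: {needs_reorder(reading)}")
```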

Ultimately, brewers will be able to use the data generated by SteadyServ to test and fine-tune marketing campaigns and promotions and to improve their product lineup.

The SteadyServ story is a microcosm of the Big Data trend that is sweeping across every mature manufacturing business in the world. Entrepreneurs in every field are using connected sensors to discover and analyze new fields of information, converting random data smog into usable insight that drives old-school manufacturing and distribution companies to greater efficiency.

With SteadyServ, Steve Hershberger developed a new technological tool to collect data that allowed him to run his business more efficiently. This isn’t the only way that a traditional business can develop a proprietary data asset. In some cases, no invention is necessary: the data that can transform your business and give you a critical competitive edge in the Vaporized Economy might already be in your pocket. But you need to figure out how to unlock it and utilize it before your competitors do.

WHAT GOOGLE KNOWS

In its quest to organize the world’s information, Google has scoured vast troves of data to amass the greatest accumulation of information assets on the planet, including the billions of search queries on google.com and YouTube and the billions of interactions on Android, the dominant operating system for most mobile devices. Google also controls an ever-growing index of the world’s websites and the browsing history of more than 2 billion users, three types of maps of the Earth’s surface and traffic patterns, a real-time list of trending topics, the largest archive of discussions in Usenet groups, the entire contents of 20 million books, a huge collection of photographs, the largest collection of video on the planet, the largest online email repository, even the largest archive of DNA data.

As the leading information company in the world, Google is converting its entire Internet business to a data platform. Google’s audacious goal of organizing the world’s information means converting more and more of the activity, resources, and systems on the planet into data. Google, more than any other company, is driving towards the Vaporized Economy. It has no choice. As Internet usage on mobile devices soars ahead of desktop and laptop access to the Web, Google’s core business of desktop search and advertising is in jeopardy.

The company is banking on data, converting its entire collection of Web and mobile properties to a data platform.
