The challenge for companies of all sizes is how best to integrate the Don Drapers and the data scientists.
Previously, I wrote about the future of marketing being a fusion of the art of storytelling with the specificity of data and the objectivity of analytics. Consumer attention is shifting from TV, print, and radio to digital, which has made tailored, real-time engagement with customers both possible and increasingly necessary. In a world of proliferating channels and constant competition for attention, there’s a lot of pressure for brands to get it right.
There’s a sense among many in the tech community that marketing is an industry that’s locked in the past. Within the early-stage startup ecosystem, “marketing” has evolved into “growth hacking.” Growth hacking relies almost exclusively on data and analytics to develop strategies for optimizing customer acquisition, retention, pricing, and more. It’s iterative and rapid; learnings are incorporated into refining both the outreach and the product itself. While the goal is largely the same, the terminology reflects a differentiation between a traditional Don Draper “gut instinct” approach and the new data geek methodology. The marketing industry is well aware of the need to adopt faster feedback loops and nimble campaign strategies; industry conferences feature panels on ramping up digital infrastructure, aligning CMO and CIO goals, and tying spend to outcome. Marketers do recognize the importance of data, but big brands care a great deal about emotional resonance, and the trend toward personalization makes targeted storytelling more complex.
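To make the growth-hacking feedback loop concrete, here is a minimal sketch of one of its most common tools: an A/B test comparing two signup-page variants with a two-proportion z-test. All the numbers (variant names, visitor counts, conversions) are invented for illustration, not drawn from any real campaign.

```python
# Toy growth-hacking loop: measure two variants, test the difference,
# then iterate on whichever hypothesis the data supports.
from math import sqrt, erf

def z_test_proportions(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) for the difference between two conversion rates,
    using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 2,400 visitors per variant.
z, p = z_test_proportions(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
if p < 0.05:
    print(f"variant B wins (z={z:.2f}, p={p:.4f}); ship it and test the next idea")
else:
    print("no significant difference; try another hypothesis")
```

The point is less the statistics than the cadence: measure, decide, refine the outreach or the product, and run the next experiment.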
Technology has changed the way we understand targeting and contextual relevance. How will marketing adapt?
Over the past five years, marketing has transformed from a primarily creative process into an increasingly data-driven discipline with strong technological underpinnings.
The central purpose of marketing hasn’t changed: brands still aim to tell a story, to emotionally connect with a prospective customer, with the goal of selling a product or service. But while the need to tell an interesting, authentic story has remained constant, customers and channels have fundamentally changed. Old Marketing took a spray-and-pray approach aimed at a broad, passive audience: agencies created demographic or psychographic profiles for theoretical consumers and broadcast ads on mass-consumption channels, such as television, print, and radio. “Targeting” was primarily about identifying high concentrations of a given consumer type in a geographic area.
The era of demographics is over. Advances in data mining have enabled marketers to develop highly specific profiles of customers at the individual level, using data drawn from actual personal behavior and consumption patterns. Now when a brand tells a story, it has the ability to tailor the narrative in such a way that each potential customer finds it relevant, personally. Users have become accustomed to this kind of sophisticated targeting; broad-spectrum advertising on the Internet is now essentially spam. At the same time, there is still a fine line between “well-targeted” and “creepy.”
Micro-patronage could let researchers step around funding obstacles.
In our first science-as-a-service post, I highlighted some of the participants in the ecosystem. In this one, I want to share the changing face of funding.
Throughout the 20th century, most scientific research funding came from one of two sources: government grants or private corporations. Government funding is often a function of the political and economic climate, so researchers who rely on it risk having to deal with funding cuts and delays. Those who are studying something truly innovative or risky often find it difficult to get funded at all. Corporate research is most often undertaken with an eye toward profit, so projects that are unlikely to produce a return on investment are often ignored or discarded.
If one looks to history, however, scientific research was originally funded by individual inventors and wealthy patrons. These patrons were frequently rewarded with effusive acknowledgements of their contributions; Galileo, for example, named the moons of Jupiter after the Medicis (though the names he chose ultimately did not stick).
There has been a resurgence of that model — though perhaps more democratic — in the modern concept of crowdfunding. Kickstarter, the most well-known of the crowdfunding startups, enables inventors, artists, and makers to source the funds they need for their projects by connecting to patrons on the platform. Contributors donate money to a project and are kept updated on its progress. Eventually, they may receive some sort of reward — a sticker acknowledging their participation or an example of the completed work. Scientists have begun to use the site, in many cases, to supplement their funding. Anyone can be a micro-patron!
What happens when you apply software-as-a-service principles to science?
Software as a service (SaaS) is one of the great innovations of Web 2.0. SaaS enables flexibility and customized solutions. It reduces costs — the cost of entry, the cost of overhead, and as a result, the cost of experimentation. In doing so, it’s been instrumental in spurring innovation.
So, what if you were to apply the principles of SaaS to science? Perhaps we can facilitate scientific progress by streamlining the process. Science as a service (SciAAS?) will enable researchers to save time and money without compromising quality. Making specialized resources and institutional expertise available for hire gives researchers more flexibility. Core facilities that own equipment can rent it out during down time, helping to reduce their own costs. The promise of science as a service is a future in which research is more efficient, creative, and collaborative.
Putting high-frequency trading into perspective.
Technology is critical to today’s financial markets. It’s also surprisingly controversial. In most industries, increasing technological involvement is progress, not a problem. And yet, people who believe that computers should drive cars suddenly become Luddites when they talk about computers in trading.
There’s widespread public sentiment that technology in finance just screws the “little guy.” Some of that sentiment is due to concern about a few extremely high-profile errors. A lot of it is rooted in generalized mistrust of the entire financial industry. Part of the problem is that media coverage on the issue is depressingly simplistic. Hyperbolic articles about the “rogue robots of Wall Street” insinuate that high-frequency trading (HFT) is evil without saying much else. Very few of those articles explain that HFT is a catchall term that describes a host of different strategies, some of which are extremely beneficial to the public market.
I spent about six years as a trader, using automated systems to make markets and execute arbitrage strategies. From 2004 to 2011, as our algorithms and technology became more sophisticated, it became increasingly rare for a trader to have to enter a manual order. Even in 2004, “manual” meant instructing an assistant to type the order into a terminal; it was still routed to the exchange by a computer. Automating orders reduced the frequency of human “fat finger” errors. It meant that we could adjust our bids and offers in a stock immediately if the broader market moved, which enabled us to post tighter markets. It allowed us to manage risk more efficiently. More subtly, algorithms also reduced the impact of human biases — especially useful when liquidating a position that had turned out badly. Technology made trading firms like ours more profitable, but it also benefited the people on the other side of those trades: they got tighter spreads and deeper liquidity.
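The quote-adjustment behavior described above can be sketched in a few lines. This is a toy illustration of the general idea (re-center quotes on a reference price, shade them against accumulated inventory), not any firm’s actual algorithm; every parameter here is invented.

```python
# Toy market-maker quoting logic: quotes track a reference price so they
# move immediately when the broader market does, and inventory skews the
# midpoint to encourage trades that flatten the position.
def quote(ref_price, inventory, half_spread=0.05, skew_per_share=0.001):
    """Return (bid, ask) centered on ref_price, skewed against inventory.

    A long position shades both quotes down (making it more attractive to
    sell to us and less attractive to buy); a short position shades them up.
    """
    skew = inventory * skew_per_share
    mid = ref_price - skew
    return round(mid - half_spread, 2), round(mid + half_spread, 2)

print(quote(ref_price=100.00, inventory=0))    # (99.95, 100.05): flat book
print(quote(ref_price=100.40, inventory=0))    # (100.35, 100.45): market ticked up, quotes follow
print(quote(ref_price=100.40, inventory=500))  # (99.85, 99.95): long 500 shares, shade down
```

A human rekeying quotes by hand can’t keep up with this loop, which is why automation lets market makers post tighter spreads than they could otherwise afford to.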