Alexa’s routines are getting a musical upgrade. First launched last year, routines allow Alexa device owners to string together a series of actions that kick off with a simple command — like “good morning” or “I’m home,” for example. Until today, the feature included support for news, weather, traffic, smart home skills, as well as, more recently, a set of “Alexa says” commands that let you add a little personality to a given routine. Starting today, Alexa can play your favorite music, podcast or radio show in a routine, too.
To use the feature, you’ll select an artist, playlist, album or station from your music library or one of the supported streaming services. Currently, the supported services are those that already work with Alexa — Amazon Music, Spotify, Pandora, iHeartRadio, Saavn, Deezer and TuneIn.
Amazon says you’ll also be able to create a volume action to control the audio output on your device.
The addition has the potential to make routines more useful for those who like to have music in their home on a more regular basis. For instance, if you like to start your day with a playlist that gives you energy, you could create a “good morning” routine that turns on your lights and smart coffee maker, then starts playing your favorite upbeat songs.
You also could create a routine for relaxing that includes more soothing music or a nighttime routine that locks the door then plays sleep sounds. A party playlist could be included in a routine that puts your smart light bulbs into a flashing disco mode or crazy colors.
But music isn’t the only option — because some of the services support radio shows or podcasts, those can now be integrated into your routines, too. For instance, a “welcome home” routine could play your daily briefing followed by a podcast or favorite radio show from TuneIn.
Music is an increasingly important business for Amazon, so better integration with Alexa makes sense. The company this week told Billboard it now has “tens of millions” of paid customers, confirming earlier reports that it has become the third-largest music service behind Spotify and Apple Music.
The ability to customize routines with music and other audio content will be available within the Alexa app for iOS and Android. However, the feature is just now beginning to roll out — so you may not see the option immediately.
Any company that’s looking to replace the over 5 billion pounds of ground beef making its way onto tables in the U.S. every year with a meatless substitute is going to need a lot of cash.
It’s a big vision with lots of implications for the world — from climate change and human health to challenging the massive, multi-billion dollar industries that depend on meat — and luckily for Impossible Foods (one of the many companies looking to supplant the meat business globally), the company has managed to attract big name investors with incredibly deep pockets to fund its meatless mission.
In the seven years since the company raised its first $7 million investment from Khosla Ventures, Impossible Foods has managed to amass another $389 million in financing — most recently in the form of a convertible note from the Singaporean global investment powerhouse Temasek (which is backed by the Singaporean government) and the Chinese investment fund Sailing Capital (a state-owned investment fund backed by the Communist Party-owned Chinese financial services firm, Shanghai International Group).
“Part of the reason why we did this as a convertible note is that we knew we would increase our valuation with the launch of our business,” says David Lee, Impossible Foods chief operating officer. “We closed $114 million in the last 18 months.” The company raised its last equity round of $108 million in September 2015.
Lee declined to comment on the company’s path to profitability, valuation, or revenues.
Impossible began selling its meat substitute back in 2016 with a series of launches at some of America’s fanciest restaurants in conjunction with the country’s most celebrated young chefs.
David Chang (of Momofuku fame in New York) signed on, as did Traci Des Jardins of Jardinière and Chris Cosentino of Cockscomb in San Francisco, while Tal Ronnen of Crossroads in Los Angeles was among the first to put the burger on a menu in Southern California.
“When we launched a year ago, we were producing out of a pilot facility,” says Impossible co-founder Pat Brown. “[Now] we have a full-fledged production facility [that will be turning out] 2.5 million pounds per month at the end of the year.”
The new facility, which opened in Oakland last year, has its work cut out for it. Impossible has plans to expand to Asia this year and is now selling its meat in over 1,000 restaurants around the U.S.
Some would argue that the meat substitute has found its legs in the fast casual restaurant chains that now dot the country, serving up mass-marketed, higher price point gourmet burgers. Restaurants including FatBurger, Umami Burger, Hopdoddy, The Counter, Gott’s and B Spot — the Midwest burger restaurant owned by Chef Michael Symon — all hawk Impossible’s meat substitute in an increasing array of combinations.
“When we started looking at what Pat and the team at Impossible were doing, we saw a perfect fit with the values and mission that Impossible has to drive a stronger mindset around what it is to be conscientious about what is going on,” says Umami Burger chief executive Daniel del Olmo.
Since launching its first burger collaboration last year, Umami Burger has sold over 200,000 Impossible Burgers. “Once people tried the burger they couldn’t believe that it was not meat,” says del Olmo. “They immediately understood that it was a product that they could crave. We are seeing [a] 38% increase in traffic leading to 18% sales growth [since selling the burger].”
At $13 a pop, the Impossible Umami Burger is impossible for most American families to afford, but pursuing the higher end of the market was always the initial goal for Impossible’s founder, Patrick Brown.
A former Stanford University professor and a serial entrepreneur in the organic food space (try his non-dairy yogurts and cheeses!), Brown is taking the same path that Elon Musk used to bring electric vehicles to the market. If higher end customers with discerning palates can buy into meatless burgers that taste like burgers, then the spending can subsidize growth (along with a few hundred million from investors) to create economics that will become more favorable as the company scales up to sell its goods at a lower price point.
Brown recognizes that 2.5 million pounds of meat substitute is no match for a 5 billion pound ground beef juggernaut, but it is, undeniably, a start. And as long as the company can boost sales for the companies selling its patties, the future looks pretty bright. “To get to scale you have to sell to a higher price point,” says Brown.
That approach is the opposite of the tack taken by Beyond Meat, perhaps the only other well-funded competitor for the meatless crown. Beyond Meat is selling through grocery stores like Whole Foods, in addition to partnerships of its own with chains like TGI Fridays and celebrity backers like Leonardo DiCaprio.
“From a brand building standpoint it would have been insane for us to launch in supermarkets given that we had the opportunity to launch with great companies like Umami and great chefs like Dave Chang,” says Brown.
Heme is their best shot
At the heart of Impossible Foods’ meatless revolution is the development of a vegetable-based heme molecule.
Heme is present in most living things and, according to Impossible Foods, it’s the molecule that gives meat its flavor. The company says that it’s the presence of the heme molecule in muscle that makes meat taste like meat. Impossible Foods engineers and ferments yeast to produce a heme protein naturally found in plants, called soy leghemoglobin.
“It’s the iron containing molecule that carries oxygen in the blood… what makes meat red or pink… It’s essential for every living cell on earth,” says Brown. “The thing that we discovered was that pretty much the entire flavor experience of meat that distinguishes it from all other foods is due to heme. Heme transforms fatty acids into the bloody flavored odorant molecules and when you cook meat, the protein that holds the meat at a certain temperature unfolds and lets loose.”
Brown says Impossible Foods can make fish flavors, chicken flavors, and pork flavors already, but is going to stick to ground beef for the foreseeable future.
The next trick for the company is to manipulate the flavor profile of its meat substitute so its burgers can win blind taste tests against any conventional meat patty.
“The company’s mission is to completely replace animals in the food system by 2035,” says Brown. “The only way to do it is to do a better job than any animal at producing the most nutritious, delicious, affordable, and versatile foods. And it will be a very interesting proof of concept landmark when we have a burger that is — for flavor and deliciousness — the best burger on earth… that’s going to send a very important signal to the world.”
The global impact
If Impossible Foods, Beyond Meat or any of their competitors working on developing cultured meat cells in a lab are successful, it will have huge implications for the world.
These lab-grown meats and meat substitutes could use up to 75% less water, generate 87% fewer greenhouse gases and require 95% less land than what’s used for meat production.
Those statistics have attracted investors like the Open Philanthropy Project, Temasek, Bill Gates and Horizons Ventures (backed by the Hong Kong billionaire Li Ka Shing). Those billionaire backers have invested in multiple rounds of funding for the company alongside other early financiers including Google Ventures, UBS and Viking Global Investors.
“The fundamental economics are so much more favorable for us than for the cow,” says Brown.
Those economics could also be compelling for potential meat production partners, he says. Brown envisions a future in which production facilities that use fermentation processes manufacture the company’s ingredients, allowing it to get to scale. “In order to scale rapidly we didn’t want to have to build the entire supply chain from the farm up.”
Given that the main ingredients are wheat, potato, and the manufactured heme protein, there’s a chance that the company could actually create an alternative supply chain to the meat packers, butchers, and slaughterhouses that dominate the landscape.
The meat industry has taken notice and is beginning to push back.
According to a report in USA Today, the U.S. Cattlemen’s Association filed a 15-page petition with the U.S. Department of Agriculture earlier this year calling for an official definition of the terms “beef” and “meat.”
“While at this time alternative protein sources are not a direct threat to the beef industry, we do see improper labeling of these products as misleading,” said Lia Biondo, the association’s policy and outreach director, in a statement. “Our goal is to head off the problem before it becomes a larger issue.”
For Brown, it’s another step along the road of how humans sustain themselves. “People act as if science and technology have been outside of the food system,” he says. “The whole food system is a combination of nature and science that makes the food that we eat come into being.”
Facebook is showing an unprecedented level of transparency around its latest effort to suspend Russian trolls trying to influence elections and mislead the public as it tries to regain the trust of users and the government. The company shared both stats about the account deletions and samples of the content they shared.
Facebook has removed 70 Facebook accounts, 138 Facebook Pages, and 65 Instagram accounts run by the Russian government-connected troll farm and election interference squad the Internet Research Agency. Facebook chief security officer Alex Stamos cited the IRA’s use of “inauthentic accounts to deceive and manipulate people” as “why we don’t want them on Facebook. We removed this latest set of Pages and accounts solely because they were controlled by the IRA — not based on the content.”
Ninety-five percent of the accounts operated in Russian and targeted Russia or Russian-speakers in nearby countries including Azerbaijan, Uzbekistan and Ukraine. 1.08 million users followed at least one of the Facebook Pages, and 493,000 users followed at least one of the Instagram accounts. The accounts had spent a combined $167,000 on ads since the start of 2015.
Facebook CEO Mark Zuckerberg wrote that since discovering the IRA’s election interference efforts, “we have improved our techniques to prevent nation states from interfering in foreign elections, and we’ve built more advanced AI tools to remove fake accounts more generally.” He went on to detail how Facebook is halfway to its promise to double its security and content review staff from 10,000 to 20,000 this year, with 15,000 now working on the efforts at Facebook.
“These efforts have all made it harder for nation states to interfere in foreign elections” Zuckerberg wrote on Facebook. “With today’s update, we have now identified a large network the IRA is using to manipulate people in Russia itself. This is the next step towards removing them from Facebook entirely.”
By detailing the specifics of its efforts rather than dragging its feet or waiting for government inquiries, Facebook may be able to convince people it’s not asleep at the wheel of its social network.
Facebook at first said just 10 million users had seen ads bought by the IRA, but later explained that when organic un-paid posts were counted, 126 million people had seen the propaganda group’s Facebook posts and another 20 million had seen its Instagram posts. Facebook previously shut down the IRA’s 170 Instagram accounts that had shared 120,000 pieces of propaganda and 120 Facebook Pages that had shared 80,000 pieces of content.
The trickle of information and initial low-ball numbers made it seem like Facebook was trying to downplay the severity of platform abuse. But in recent weeks since the Cambridge Analytica scandal broke, Facebook has seemed increasingly transparent and receptive to criticism. It seems the convergence of bad news has truly shaken Facebook awake.
Reports of an active shooter at YouTube’s San Bruno, CA headquarters have surfaced on social media. Local news station KRON is reporting a number of 911 calls in the area related to the event. Local police have also warned bystanders to stay out of the area.
It appears as though emergency teams have begun evacuating the facility, including YouTube Live product team member Vadim Lavrusik, who was among the first to report the incident on Twitter:
Active shooter at YouTube HQ. Heard shots and saw people running while at my desk. Now barricaded inside a room with coworkers.
The video service’s Bay Area headquarters houses more than 1,100 employees, according to Google. TechCrunch has confirmed with San Bruno Police that there is an ongoing incident in the area, but the department has yet to issue an official statement. They have, however, confirmed with The Hollywood Reporter that the situation constitutes a “very active scene.”
Snap Maps photos from the area confirm that a large-scale evacuation appears to be underway at the office building. In addition to the images of multiple police cars, eyewitness reports have also noted the presence of the bomb squad and fire trucks at the scene.
Google has thus far only confirmed that it is “coordinating with authorities,” promising further updates as information becomes available.
We’ve reached out to Google for more information. We will update this story as we learn more.
Spotify has completed its long-awaited “direct listing” experiment. The music streaming company went public without an IPO.
After completing its first trade halfway through the day at $165.90, Spotify fell to $149.01, 10% beneath the open. It was a down day on the stock market, but at a $26.5 billion market cap, the company is up from the private market trading that happened in the months leading up to the listing.
The top end of that private trading range, $132, was used as a “reference point,” valuing the company at $23.5 billion. Since there was no IPO price, that demarcation is being used to say that Spotify traded up about 13% on its first day.
Yet while it achieved a desirable market cap, some on Wall Street are puzzled as to why Spotify would want to go public without raising money.
One myth that’s been floating around is that Spotify did this to avoid paying bankers. In fact, the company worked with Morgan Stanley, Goldman Sachs and Allen & Co. in the lead-up to the debut.
Spotify did not eliminate the investment banks, but it did manage to avoid the dreaded “lock-up,” which prevents most employees and insiders from selling shares until its expiration, usually about six months after an IPO.
Some are wondering if Spotify’s debut will be replicated in the future.
“The direct listing is really interesting as a potential roadmap for future companies because the price that Spotify now trades at is a real price, without any of the distortions which come from a lockup or a banker-managed process,” said Chi-Hua Chien, managing partner at Goodwater Capital. Chien invested in Spotify when he was at Kleiner Perkins. He believes that “the price is as real an expression of the value of the company as possible, which makes it an interesting case study for future companies moving into the public markets.”
Apart from the change in process, this debut also felt different from IPOs because there was no celebration. There was no bell-ringing and no Spotify employees cheering from the floor.
Outside the New York Stock Exchange, there was a Spotify banner to commemorate the event. And next to it, a Swiss flag meant to honor the company. The only problem: Spotify is Swedish.
If you’re running a software company today, it’s almost a foregone conclusion that most or all of your apps will run in the cloud. Likely Amazon or Google’s. It’s hard to imagine that this wasn’t always the case, but there are still some late adopters migrating their own physical data centers into managed ones. And, as with all trends in technology, this too shall pass. Just when you were getting comfortable with containers and auto-scaling, a new architecture emerges, swinging the pendulum back to a truly distributed world.
What’s wrong with the cloud?
A typical self-driving car generates up to 100MB of data per second from a combination of cameras, LIDARs, accelerometers and on-board computers. That data needs to be processed nearly instantly to keep the car on the road. With so much data to sift through, the current generation of cellular networks can’t keep up. By the time data arrives in the cloud, it will be too late. Instead, data needs to be processed as close to the sensors as possible, directly at the edge of networks, on the cars themselves.
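A quick back-of-the-envelope calculation makes the bottleneck concrete. The uplink figure below is an illustrative assumption (real-world cellular throughput varies widely), but the mismatch holds at any plausible value:

```python
# Back-of-the-envelope: can a cellular uplink keep up with a self-driving
# car's sensor suite? The LTE figure is an illustrative assumption.

SENSOR_RATE_MBPS = 100 * 8   # ~100 MB/s of sensor data, expressed in megabits/s
LTE_UPLINK_MBPS = 25         # assumed real-world LTE upload throughput

shortfall = SENSOR_RATE_MBPS / LTE_UPLINK_MBPS
print(f"Sensors produce {SENSOR_RATE_MBPS} Mbps; the uplink carries "
      f"{LTE_UPLINK_MBPS} Mbps, a {shortfall:.0f}x shortfall.")
```

Even under these generous assumptions, the network would need to be roughly 32 times faster just to break even, before accounting for round-trip latency.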
Most of us aren’t building or riding in self-driving cars (yet), but there’s a good chance we’re already interacting with edge computing every day. Neural networks in smart speakers in almost 40 million American homes are listening for words like “Alexa,” “Siri” or “Google” and, according to Statista, 3 billion Snapchats are scanned for faces each day in order to add the addictive face filters. By the end of the year, 20 percent of smartphones globally will have hardware-accelerated machine learning capabilities.
How did we get here?
All of these apps and devices are made possible by two major trends: advances in deep learning algorithms that help computers see, hear and understand and the proliferation of specialized processors like GPUs and TPUs that can run these algorithms efficiently, even in mobile environments.
Neural networks and deep learning aren’t new. In fact, the first artificial neural networks were created in the 1950s, and there have been multiple false starts since. This time, though, the abundance of labeled training data and compute power made it feasible to train these large models. Though AI research is still proceeding at a breakneck pace, fields like computer vision are starting to mature. Developers can choose from a variety of standardized model architectures, publicly available training data sets and tools. You no longer need a PhD just to get started. Technology is being democratized.
Hardware is catching up, fast. Machine learning algorithms like neural networks are really just long sequences of matrix multiplications. Specialized processors like GPUs and newer neural processing units like those in Apple’s A11 Bionic chip and Google’s Tensor Processing Unit (TPU) are optimized for exactly these mathematical operations, offering 10-100x speedups over traditional CPUs while using less power overall. As major chip manufacturers roll out mobile-ready machine learning accelerators, every device will soon have the power to run the latest AI models.
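To make the “just matrix multiplications” point concrete, here is a minimal sketch of a single fully connected neural network layer in NumPy. The layer sizes are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fully connected layer is one matrix multiply, a bias add, and a nonlinearity.
x = rng.standard_normal((1, 784))    # one flattened 28x28 input image
W = rng.standard_normal((784, 128))  # layer weights
b = np.zeros(128)                    # layer bias

h = np.maximum(0, x @ W + b)         # ReLU(xW + b): the entire layer

print(h.shape)  # (1, 128)
```

A deep network is essentially this operation repeated layer after layer, which is why hardware optimized for dense matrix math pays off so dramatically.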
The edge = new opportunity
Big data, data science, machine learning and now deep learning have been slowly weaving their way into products and companies for the past decade. Most of the time, this happened behind the scenes, up in the cloud. Data warehouses and analytics pipelines process records en masse. Results are made accessible to end users through APIs and database queries. That’s not going away, but the edge presents a new opportunity to use the predictive capabilities of machine learning models more quickly.
Now, the algorithms move to the data. Information is processed in real time, as soon as it’s captured by the sensor, and results are available immediately. In this latency-free world, entirely new user experiences are possible. Your phone’s screen becomes a portal to a world of augmented reality. Products can be personalized for a single user while private data never leaves the device. Applications become ambient and frictionless, anticipating questions and answering them before you ask.
It doesn’t take a PhD
When done right, experiences made with AI and edge computing feel like magic, but building them is incredibly complex. There is a divide between the tech stacks used to train and deploy machine learning models in the cloud and the ones used to build applications for edge devices, like smartphones and IoT. Neural networks can replace thousands of lines of procedural code, but fail in unexpected, silent ways and need to be tested differently. Performance issues that can be solved by simply adding more compute or memory from a near infinite cloud call for specialized optimization when they occur out on edge devices we don’t control. Even the programming languages preferred by the cloud are different than those running applications on mobile devices.
This is starting to change. Tools and hardware are improving so quickly it’s hard to keep up. Heavyweights like Apple and Google have made mobile machine learning frameworks (Core ML and TensorFlow Lite, respectively) centerpieces of their latest developer offerings. More export options and better interoperability are being added to tools like AWS’s SageMaker, Azure’s ML Studio and IBM’s Watson Studio weekly.
It’s time to start thinking about ways you can improve your applications by leveraging machine learning and edge computing. It doesn’t take a PhD in AI or years of experience to get started anymore — and if you don’t act quickly, you risk getting left behind.
For those who live under a rock (which, these days, I would recommend), President Donald Trump has become increasingly belligerent towards Amazon and its founder, Jeff Bezos.
In addition to a sequence of tweets against the ecommerce and cloud giant, Gabriel Sherman reported in Vanity Fair yesterday that “Now, according to four sources close to the White House, Trump is discussing ways to escalate his Twitter attacks on Amazon to further damage the company. ‘He’s off the hook on this. It’s war,’ one source told me. ‘He gets obsessed with something, and now he’s obsessed with Bezos,’ said another source. ‘Trump is like, how can I fuck with him?’”
‘How can I fuck with them?’ could also describe America’s backwards approach to its flagging prowess in critical technology fields, policies that stand in stark contrast to the massive and focused investment of strategic adversaries like China.
China selects, grows, and champions a set of winners in each industry in order to concentrate resources and increase the probability of success globally for its chosen companies. As Antonio Graceffo described in Foreign Policy Journal, “National champions are companies which help further the government’s strategic aims and in return, the government supports these companies by providing easier access to financing, giving preference in government contract bidding, and sometimes oligarchy or monopoly status in protected industries, giving these companies a number of advantages over their competitors.”
There are good reasons to be concerned about market dominance — it limits consumer choice and often raises prices. However, there are obvious limitations on how many competitors can enter markets like wireless infrastructure and cloud computing. The upfront costs are exorbitant — just launching a single data center today can easily cost hundreds of millions of dollars or more, and conducting original R&D in a competitive industry like artificial intelligence is equally expensive when a machine learning expert can go for tens of millions of dollars.
We are never going to have five Googles, five Dropboxes, or five Amazons — the economics in these markets just don’t work that way. Their scale is what allows them to offer such comprehensive services at such low cost to consumers. Knocking out Apple would really mean opening the American market to the next four smartphone manufacturers, which would be Asian manufacturers like Samsung, Huawei, Lenovo, and Xiaomi. That sounds like a Pyrrhic victory to me.
The U.S. believes in the power of free markets to cull losers and ensure winners a fair return, and the government avoids picking “winners” as a matter of course in its industrial policy. That worked great when the American economy was dominant, but it is no longer tenable in a world where strategic adversaries are putting their full weight behind a handful of companies.
So instead of getting on The Twitter and blasting Amazon, maybe this administration should start to consider that Amazon’s size and dominance in ecommerce and cloud services is actually an incredible blessing of American capitalism. Maybe it should start to think about how the government could assist Amazon in capturing more overseas markets, ensuring that the wealth generated by the company continues to return to its home country.
The threats faced by American tech companies parallel similar fears of the 1980s, when Japan’s resurgence on the world stage captivated the attention of U.S. politicians. China, though, has nearly eleven times the population of Japan, and has already overtaken the U.S. economy by some measures. This time really is very different, and the free market needs defenders. Ironically, that means backing American tech giants globally against their competitors.