Climate risk – including extreme events and the related pressures on our environment – is fundamentally affecting the way businesses and governments operate, both tactically and strategically. Increasing climate volatility is disrupting food supplies and putting growing pressure on enterprises (including financial institutions, insurers and producers) to disclose what’s going on.
The trouble is that while there is a lot of data about all this, its complexity, incompleteness and sheer volume are too vast for humans to process with the tools available today. So just as the climate changes, we are faced with ‘data chaos’. Equally, other parts of the world suffer from data scarcity, making it much harder to provide useful and timely analysis.
The challenge is to address these issues simultaneously. A new startup, Cervest, has created an AI-driven platform designed to inform the decision-making of businesses, governments and growers in the face of increasing climate volatility.
Cervest has now closed a £3.7m investment round to fund the launch of its real-time climate forecasting platform.
The round was led by deep-tech investor Future Positive Capital, with co-investor Astanor Ventures. The seed-stage funding round brings the company’s total funding to more than £4.5m.
Built on three years of research and development by a team of scientists, mathematicians, developers and engineers, Cervest says its Earth Science AI platform can analyze billions of data points to forecast how changes in the climate will impact the future of entire countries right down to individual landscapes.
It does this by combining research and modeling techniques taken from proven Earth sciences – including atmospheric science, meteorology, hydrology and agronomy – with artificial intelligence, imaging, machine learning and Bayesian statistics.
Using large collections of satellite imagery and probability theory, the platform can identify signals, or early-warning signs, of extreme events such as floods, fires, and strong winds. It can also spot changes in soil health, and identify water risk.
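The probabilistic side of this approach can be illustrated with a toy Bayesian update. The sketch below is purely illustrative, with hypothetical numbers, and is not Cervest’s actual model: it shows how a prior probability of an extreme event can be revised upward once a satellite-derived early-warning signal is observed.

```python
def bayes_update(prior: float, p_signal_given_event: float,
                 p_signal_given_no_event: float) -> float:
    """Return P(event | signal) via Bayes' rule."""
    numerator = p_signal_given_event * prior
    evidence = numerator + p_signal_given_no_event * (1.0 - prior)
    return numerator / evidence

# Hypothetical base rate of a damaging flood for a given landscape: 2%.
# Suppose a signal (e.g. rising surface-water extent in satellite imagery)
# fires in 80% of flood cases but also in 10% of non-flood cases.
posterior = bayes_update(prior=0.02,
                         p_signal_given_event=0.8,
                         p_signal_given_no_event=0.1)
print(round(posterior, 3))  # prints 0.14: the signal raises 2% to ~14%
```

Even a noisy signal can multiply the assessed risk several-fold, which is what makes early-warning detection valuable for decision-makers.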
Cervest says the platform could do such things as reveal to a multinational the optimum location for a new factory; warn a wheat grower that their crop yield isn’t expected to meet its targets; or help insurers set premiums for the next 12 months.
The team comes from a network of more than 30 universities, including Imperial College, The Alan Turing Institute, Cambridge, UCL, Harvard and Oxford, and has published more than 60 peer-reviewed scientific papers.
A beta version of the platform is due to launch in Q1 2020.
Iggy Bassi, Founder & CEO, Cervest said: “Our goal is to empower everyone to make informed decisions that improve the long-term resilience of our planet. Today decision-makers are struggling with climate uncertainty and extreme events and how they are affecting their business operations, assets, investments, or policy choices.”
Sofia Hmich, Founder, Future Positive Capital said: “With reports suggesting we have fewer than 60 years of farming left unless drastic action is taken, the need for science-backed decisions could not be greater. Businesses and policymakers hold the key to change and with access to Cervest’s proprietary AI technology they can start to make that change a reality at low cost – before it’s too late.”
Bassi previously ran the impact-led agribusiness GADCO, which was supported by Acumen Fund, Soros, the Gates Foundation, the World Bank and Syngenta, and whose impact was featured by the UNDP, the World Economic Forum, the FT, The Guardian and HuffPost. Before that, he built a software company focused on data analytics.
Cervest was inspired by Bassi’s experience building a farm-to-market agribusiness whilst confronting first-hand the impacts of climate and natural-resource volatility.
The Cervest team includes 8 scientists and 4 PhDs. Between them, they have published more than 60 peer-reviewed scientific papers with more than 3000 citations in high-profile titles including Nature, Proceedings of the National Academy of Sciences and The Royal Statistical Society.
MIT’s Computer Science and Artificial Intelligence Laboratory has come up with a clever way for its small cube-like robots, which can move on their own, to communicate and coordinate with one another for self-assembly. The behavior is described by MIT researchers as somewhat ‘hive-like,’ and in the video above you can see what they mean by that.
These cube bots can roll across the ground, navigate up and across each other, and even jump short distances. And thanks to recent improvements made by the team working on the project, they can also communicate in a basic way using unique barcode identifiers on the faces of the blocks to allow them to identify one another. These 16 blocks can now use their communication system and their ability to move themselves around to perform tasks including producing various shapes, or even following arrows or light signals.
Their current abilities are pretty limited, but the researchers envision a time when a larger and more advanced version of this system could be used to efficiently deploy self-assembling bots that can create structures like bridges, ramps or even staircases for use in disaster response or rescue scenarios. Of course, they also theorize these things might be pretty attractive for more mundane applications like gaming, too.
Perhaps best known for a career-making seed investment in Snapchat, Lightspeed partner Jeremy Liew is a leading investor across media and entertainment, making bets on startups like Cheddar, Giphy, HQ, SpecialGuest, Mic, Beme, Playdom, Duta and Flixster.
I spoke to him earlier this week about how he assesses the market for media startups, which led into a discussion about “always-on” forms of entertainment that add stimulation to a person’s environment, instead of commanding their full focus.
Here’s the transcript of our conversation, edited for length and clarity:
Eric Peckham: Do you have a consistent framework for evaluating potential investments?
Jeremy Liew: Our perspective is that consumer technology is now more about the consumer side than the technology side. It’s really more about pop culture than new innovations in technology.
When we are assessing a consumer investment we ask ourselves, “does this have the potential to become part of pop culture?” One way to think about it is whether people who don’t use the product will still become familiar with what it is. Like how you can understand a reference to “Game of Thrones” even if you don’t watch it.
Another key question is whether there is a scalable, repeatable way for the product to reach its audience. That can be advertising, it can be word of mouth, it could be through social channels.
We also ask ourselves, “is this product going to build a new habit?” and we assess whether the entrepreneur has a unique insight into both why this is happening and why it’s happening now.
Your colleague Alex Taussig told me you have an overarching “future of TV” thesis that’s guided a number of your investments. Tell me about that thesis and how it filters opportunities in the media & entertainment space for you.
I think you can split what used to be called TV into two core use cases: “TV as entertainment” and “TV as company.”
“TV as entertainment” is most of what Netflix, Amazon, Apple, HBO, and similar companies have been focused on. It is high-production quality entertainment you have to pay attention to. Think shows like “Game of Thrones,” “Succession,” “Orange is the New Black.”
Then there’s another classic category of TV — “TV as company,” which is stuff that’s on while you’re doing something else. You’ve got the morning show on while you’re getting the kids ready for school or you’re getting ready to go to work. That’s how you get the five hours of TV viewing per day that Americans average.
TV as entertainment has to be so good that you choose to watch it over doing anything else; TV as company you just have to not choose to turn it off.
The vast amount of attention to the move to video — with subscription video on-demand (SVOD) and so forth — has been on TV as entertainment. There are hit shows that will attract people to Netflix, or to HBO Go, to Disney+. But what causes them to stay as a subscriber after they binge-watched all the way through the stuff that brought them in the first place?
That tends to be the TV as company content. If you actually look at hours watched in television, no one is tuning in to catch the latest episode of “Shark Week” — it is just what’s on. Think about the TV Guide grid: every genre, every channel will likely have a mobile native equivalent.
Some of these already exist. ESPN — it’s a channel where men watch the best competitors in the world play the sports they used to play when they were in high school and then they talk about it with their friends. Twitch is a place where men, mostly, watch the best competitors in the world play the games they used to play when they were younger and talk about it with their friends.
We’ve known for a while now that Google was bringing the “Incognito mode” concept to Maps, allowing you to run searches and find routes without them automatically being tied to your account history.
If you’ve been digging around trying to find the option without any luck, you weren’t just missing it. Though first mentioned back in May at Google I/O, the company says the rollout is just now officially underway.
It’s a staged rollout, so don’t be surprised if you don’t see the new feature immediately, even if you’re on the latest version of Maps. It’s rolling out in batches, beginning with Android users. Google says it should be available to all Android users in “the next few days.”
Once it’s enabled on your account, you can toggle incognito mode on/off by tapping your profile picture then flipping the switch. Here’s what that looks like:
So why incognito mode? As we wrote back in May: whether it’s the holiday season and you’re trying to keep your gift-hunting locations under wraps, or you’re visiting a doctor and would just prefer it not pop up the next time a friend grabs your phone for some quick directions, there are all sorts of reasons you might want to leave fewer breadcrumbs. Remember, though, that while it’s less visibly tied to you, your activity is still stored behind the scenes on Google’s end; the company told Wired earlier this month that while Incognito sessions aren’t tied to an account, they are logged with a unique session identifier that gets reset between sessions.
Ed Niedermeyer is an author, columnist and co-host of The Autonocast. His book, Ludicrous: The Unvarnished Story of Tesla Motors, was released in August 2019.
“Congrats! This car is all yours, with no one up front,” the pop-up notification from the Waymo One app reads. “This ride will be different. With no one else in the car, Waymo will do all the driving. Enjoy this free ride on us!”
Moments later, an empty Chrysler Pacifica minivan appears and navigates its way to my location near a park in Chandler, the Phoenix suburb where Waymo has been testing its autonomous vehicles since 2016.
Waymo, the Google self-driving-project-turned-Alphabet unit, has given demos of its autonomous vehicles before. More than a dozen journalists experienced driverless rides in 2017 on a closed course at Waymo’s testing facility in Castle; and Steve Mahan, who is legally blind, took a driverless ride in the company’s Firefly prototype on Austin’s city streets way back in 2015.
But this driverless ride is different — and not just because it involved an unprotected left-hand turn, busy city streets or that the Waymo One app was used to hail the ride. It marks the beginning of a driverless ride-hailing service that is now being used by members of its early rider program and eventually the public.
It’s a milestone that has been promised — and has remained just out of reach — for years.
Nearly two years after Waymo CEO John Krafcik’s comments, vehicles driven by humans — not computers — still clog the roads in Phoenix. The majority of Waymo’s fleet of self-driving Chrysler Pacifica minivans in Arizona have human safety drivers behind the wheel; and the few driverless ones have been limited to testing only.
Despite some progress, Waymo’s promise of a driverless future has seemed destined to be forever overshadowed by stagnation. Until now.
Waymo wouldn’t share specific numbers on just how many driverless rides it would be giving, only saying that it continues to ramp up its operations. Here’s what we do know. There are hundreds of customers in its early rider program, all of whom will have access to this offering. These early riders can’t request a fully driverless ride. Instead, they are matched with a driverless car if it’s nearby.
There are, of course, caveats to this milestone. Waymo is conducting these “completely driverless” rides in a controlled geofenced environment. Early rider program members are people who are selected based on what ZIP code they live in and are required to sign NDAs. And the rides are free, at least for now.
Still, as I buckle my seatbelt and take stock of the empty driver’s seat, it’s hard not to be struck, at least for a fleeting moment, by the achievement.
It would be a mistake to think that the job is done. This moment marks the start of another, potentially lengthy, chapter in the development of driverless mobility rather than a sign that ubiquitous autonomy is finally at hand.
Futuristic joyride
A driverless ride sounds like a futuristic joyride, but it’s obvious from the outset that the absence of a human touch presents a wealth of practical and psychological challenges.
As soon as I’m seated, belted and underway, the car automatically calls Waymo’s rider assistance team to address any questions or concerns about the driverless ride — bringing a brief human touch to the experience.
I’ve been riding in autonomous vehicles on public roads since late 2016. All of those rides had human safety drivers behind the wheel. Seeing an empty driver’s seat at 45 miles per hour, or a steering wheel spinning in empty space as it navigates suburban traffic, feels inescapably surreal. The sensation is akin to one of those dreams where everything is the picture of normalcy except for that one detail — the clock with a human face or the cat dressed in boots and walking with a cane.
Other than that niggling feeling that I might wake up at any moment, my 10-minute ride from a park to a coffee shop was very much like any other ride in a “self-driving” car. There were moments where the self-driving system’s driving impressed, like the way it caught an unprotected left turn just as the traffic signal turned yellow or how its acceleration matched surrounding traffic. The vehicle seemed to even have mastered the more human-like driving skill of crawling forward at a stop sign to signal its intent.
Only a few typical quirks, like moments of overly cautious traffic spacing and overactive path planning, betrayed the fact that a computer was in control. A more typical rider, specifically one who doesn’t regularly practice their version of the driving Turing Test, might not have even noticed them.
How safe is ‘safe enough’?
Waymo’s decision to put me in a fully driverless car on public roads anywhere speaks to the confidence it puts in its “driver,” but the company was cagey about the specific source of that confidence.
Waymo’s Director of Product Saswat Panigrahi declined to share how many driverless miles Waymo had accumulated in Chandler, or what specific benchmarks proved that its driver was “safe enough” to handle the risk of a fully driverless ride. Citing the firm’s 10 million real-world miles and 10 billion simulation miles, Panigrahi argued that Waymo’s confidence comes from “a holistic picture.”
“Autonomous driving is complex enough not to rely on a singular metric,” Panigrahi said.
It’s a sensible, albeit frustrating, argument, given that the most significant open question hanging over the autonomous drive space is “how safe is safe enough?” Absent more details, it’s hard to say if my driverless ride reflects a significant benchmark in Waymo’s broader technical maturity or simply its confidence in a relatively unchallenging route.
The company’s driverless rides are currently free and only taking place in a geofenced area that includes parts of Chandler, Mesa and Tempe. This driverless territory is smaller than Waymo’s standard domain in the Phoenix suburbs, implying that confidence levels are still highly situational. Even Waymo vehicles with safety drivers don’t yet take riders to one of the most popular ride-hailing destinations: the airport.
The complexities of driverless
Panigrahi deflected questions about the proliferation of driverless rides, saying only that the number has been increasing and will continue to do so. Waymo has about 600 autonomous vehicles in its fleet across all geographies, including Mountain View, Calif. The majority of those vehicles are in Phoenix, according to the company.
However, Panigrahi did reveal that the primary limiting factor is applying what it learned from research into early rider experiences.
“This is an experience that you can’t really learn from someone else,” Panigrahi said. “This is truly new.”
Some of the most difficult challenges of driverless mobility only emerge once riders are combined with the absence of a human behind the wheel. For example, developing the technologies and protocols that allow a driverless Waymo to detect and pull over for emergency response vehicles and even allow emergency services to take over control was a complex task that required extensive testing and collaboration with local authorities.
“This was an entire area that, before doing full driverless, we didn’t have to worry as much about,” Panigrahi said.
The user experience is another crux of driverless ride-hailing. It’s an area to which Waymo has dedicated considerable time and resources — and for good reason. User experience turns out to hold some surprisingly thorny challenges once humans are removed from the equation.
The everyday interactions between a passenger and an Uber or Lyft driver, such as conversations about pick-up and drop-offs as well as sudden changes in plans, become more complex when the driver is a computer. It’s an area that Waymo’s user experience research (UXR) team admits it is still figuring out.
Computers and sensors may already be better than humans at specific driving capabilities, like staying in lanes or avoiding obstacles (especially over long periods of time), but they lack the human flexibility and adaptability needed to be a good mobility provider.
Learning how to either handle or avoid the complexities that humans accomplish with little effort requires a mix of extensive experience and targeted research into areas like behavioral psychology that tech companies can seem allergic to.
Not just a tech problem
Waymo’s early driverless rides mark the beginning of a new phase of development filled with fresh challenges that can’t be solved with technology alone. Research into human behavior, building up expertise in the stochastic interactions of the modern urban curbside, and developing relationships and protocols with local authorities are all deeply time-consuming efforts. These are not challenges that Waymo can simply throw technology at, but require painstaking work by humans who understand other humans.
Some of these challenges are relatively straightforward. For example, it didn’t take long for Waymo to realize that dropping off riders as close as possible to the entrance of a Walmart was actually less convenient due to the high volume of foot traffic. But understanding that pick-up and drop-off isn’t ruled by a single principle (e.g. closer to the entrance is always better) hints at a hidden wealth of complexity that Waymo’s vehicles need to master.
As frustrating as the slow pace of self-driving proliferation is, the fact that Waymo is embracing these challenges and taking the time to address them is encouraging.
The first chapter of autonomous drive technology development was focused on the purely technical challenge of making computers drive. Weaving Waymo’s computer “driver” into the fabric of society requires an understanding of something even more mysterious and complex: people and how they interact with each other and the environment around them.
Given how fundamentally autonomous mobility could impact our society and cities, it’s reassuring to know that one of the technology’s leading developers is taking the time to understand and adapt to them.
As we move from a world dominated by virtual machines to one of serverless, it changes the nature of monitoring, and vendors like New Relic certainly recognize that. This morning the company announced it was acquiring IOpipe, an early-stage Seattle serverless monitoring startup to help beef up its serverless monitoring chops. Terms of the deal weren’t disclosed.
New Relic gets what it calls “key members of the team,” which at least includes co-founders Erica Windisch and Adam Johnson, along with the IOpipe technology. The new employees will be moving from Seattle to New Relic’s Portland offices.
“This deal allows us to make immediate investments in onboarding that will make it faster and simpler for customers to integrate their [serverless] functions with New Relic and get the most out of our instrumentation and UIs that allow fast troubleshooting of complex issues across the entire application stack,” the company wrote in a blog post announcing the acquisition.
It adds that initially the IOpipe team will concentrate on moving AWS Lambda features like Lambda Layers into the New Relic platform. Over time, the team will work on increasing support for serverless function monitoring. New Relic is hoping that by combining the IOpipe team and technology with its own, it can sharpen its serverless monitoring chops.
As TechCrunch’s Frederic Lardinois pointed out in his article about the company’s $2.5 million seed round in 2017, Windisch and Johnson bring impressive credentials.
“IOpipe co-founders Adam Johnson (CEO) and Erica Windisch (CTO), too, are highly experienced in this space, having previously worked at companies like Docker and Midokura (Adam was the first hire at Midokura and Erica founded Docker’s security team). They recently graduated from the Techstars NY program,” Lardinois wrote at the time.
The startup has been helping companies running AWS Lambda monitor their serverless operations. It’s important to understand that serverless doesn’t mean there are no servers; rather, the cloud vendor — in this case AWS — provides exactly the resources needed to complete an operation and nothing more.
Once the operation ends, the resources can simply get redeployed elsewhere. That makes building monitoring tools for such ephemeral resources a huge challenge. New Relic has also been working on the problem and released New Relic Serverless for AWS Lambda offering earlier this year.
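The basic pattern behind function-level monitoring of the kind IOpipe popularized can be sketched as a wrapper around the handler itself: because the container may vanish right after the invocation, the timing and error record has to be captured and emitted before the handler returns. The names below are hypothetical and this is not IOpipe’s or New Relic’s actual API; a real agent would ship the record to a collector rather than print it.

```python
import functools
import json
import time

def monitored(handler):
    """Wrap a serverless-style handler and emit one timing/error record
    per invocation, even when the handler raises."""
    @functools.wraps(handler)
    def wrapper(event, context=None):
        start = time.perf_counter()
        record = {"function": handler.__name__, "error": None}
        try:
            return handler(event, context)
        except Exception as exc:
            record["error"] = repr(exc)
            raise
        finally:
            # Emit before the ephemeral container can be torn down.
            record["duration_ms"] = (time.perf_counter() - start) * 1000.0
            print(json.dumps(record))
    return wrapper

@monitored
def hello(event, context=None):
    # Stand-in for an AWS Lambda-style handler (event dict, context object).
    return {"statusCode": 200, "body": f"hello {event.get('name', 'world')}"}

response = hello({"name": "lambda"})
```

Each call prints a JSON record alongside the handler’s own return value, which is the crux of instrumenting resources that exist only for the duration of one invocation.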
IOpipe was founded in 2015, which was just around the time that Amazon was announcing Lambda. At the time of the seed round the company had eight employees. According to Pitchbook data, it currently has between 1 and 10 employees, and has raised $7.07 million since its inception.
New Relic was founded in 2008 and raised over $214 million, according to Crunchbase, before going public in 2014. Its stock price was $65.42 at the time of publication, up $1.40.
It’s almost certain that an angel will play a role in your startup’s journey, but like everything else in a startup’s life, you need to watch out for potential problems. If you don’t manage them properly, these early backers could get in the way of your startup’s success. The advice from investors and founders below will help you navigate the process of raising a successful angel round while avoiding some of the long-term hassles.
James Currier laughs at how little he knew as a first-time founder looking for angel investors in 1999. The more investors the better, he figured — until he had to handle the fallout of an angel round with sixteen backers.
“They want you to talk to their lawyer and their tax accountant and, before you know it, my 16 angels turned into 64 different relationships,” says Currier, a four-time serial entrepreneur and now a managing partner at early-stage venture firm NFX. “It quite quickly became almost a full-time job checking in with them and taking time to hear all their ideas.”
His advice to founders? Keep your circle of investors as small as possible so you can concentrate on what matters. “This is a time to be as focused as you can on product and customers and revenue,” he says. Raising an angel round is a triumphant moment for any fledgling startup. Yet as with every step of a founder’s journey, there are potential landmines along the way. Failing to manage your angels, Currier and others warn, can distract a company and hurt, rather than help, its chances of success. “They can really gum up the works sometimes,” Currier says.
Be selective
The first thing to keep in mind when thinking about raising an angel round is to choose carefully. “I advise clients to be really thoughtful about who they bring into the seed round,” says Ivan Gaviria, a lawyer at Gunderson Dettmer who has been counseling Silicon Valley startups for more than twenty years. “Clients will say to me, ‘Oh, I need this person because they’re connected in this industry and they’re going to get me leads or whatever,’” he says. “But guess what? Everybody’s well-intentioned, but everybody’s also busy.”
In the vast majority of cases, once angels write a check and get their shares, Gaviria says, “they are not going to spend a ton of time adding value to the company.” Yet Gaviria still sees entrepreneurs pursue what he and others call a “party round,” especially when founders have a well-developed network. “I’ve seen 20, 25, 30 parties, all wanting to drop $25,000 or $50,000 in an entrepreneur’s new endeavor,” he says. “They’re trying to do right by their friends and acquaintances and others who want in on the angel round, but they’re also creating headaches for themselves and their lead investors.”
Gaviria gives the example of “pro-rata rights” — the routine guarantee that angel investors can choose to buy a proportionate number of shares in a future round. “Now 20 or 25 or 30 people have to be notified and the paperwork done for each,” he says.
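The arithmetic behind a pro-rata right is simple, which is exactly why the paperwork scales so badly with headcount. A minimal sketch, with hypothetical numbers:

```python
def pro_rata_allocation(investor_shares: int, total_shares: int,
                        new_round_shares: int) -> int:
    """Shares an investor may buy in a new round to maintain their
    current ownership percentage (a standard pro-rata right)."""
    ownership = investor_shares / total_shares
    return round(ownership * new_round_shares)

# An angel owns 50,000 of 10,000,000 outstanding shares (0.5%). If the
# next round issues 2,000,000 new shares, their entitlement is 0.5% of those.
print(pro_rata_allocation(50_000, 10_000_000, 2_000_000))  # prints 10000
```

The calculation takes one line per investor, but every one of those 20 or 30 angels must still be formally notified and documented each time the right is triggered.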
Set expectations and boundaries for communication
An even bigger challenge is time management and navigating relationships with 20 or 30 people to whom you’ll now feel obliged. “There’s a difference with angels between information versus advice and engagement,” Currier says. “As a CEO, you have to explain to angels the difference. Because some angels want to be entertained.” Those investors hope for a financial return, of course, but they also relish the idea of having a front-row seat to your entrepreneurial journey. To make sure that desire doesn’t turn into a burdensome series of check-ins and inopportune “how is it going” calls, NFX gives founders a template for communications with investors. A monthly report helps to set both expectations and boundaries, Currier says.
Slowly but surely, smart speakers are taking over. As Amazon builds Alexa into everything from tiny clocks to microwaves and Google wraps Assistant into just about anything it can, it feels like it’ll be no time before the rooms that don’t have some sort of voice-powered device are the exception.
But most people and businesses probably have no idea how to get their content prepped and ready to play friendly with these speakers. That’s the driving force behind Soundcheck, a company opening up its doors this morning.
Soundcheck helps to take your content and package it up in a format these smart speakers and voice assistant devices more readily understand.
Soundcheck’s primary focus initially is on WordPress-powered sites — not a small target, considering that estimates suggest WordPress powers over 30% of the Internet. They’ve built a plugin that lets you take information on your WordPress site and, in a tap or two, wrap up the most important pieces in Google’s “speakable” data format — effectively acting as a highlighter, saying “Hey voice speakers and the search algorithms that power them! This bit of information is meant for you, and answers that question about Topic X.”
Getting data into this format usually means writing custom markup for each page in question, which is something that not everyone (like, say, a small business owner using WordPress mostly for the whole simplified WYSIWYG aspect) is prepped to do. Soundcheck boils the process down to a button press, handles the data validation, and provides a preview of how that content might sound when read aloud by a voice assistant.
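For a sense of what that markup looks like, here is a sketch of the schema.org “speakable” pattern Google’s documentation describes: a JSON-LD block that flags, via CSS selectors, which parts of a page a voice assistant should read aloud. The selectors, URL and page content below are hypothetical, and this is not Soundcheck’s actual output.

```python
import json

# schema.org SpeakableSpecification: points assistants at the headline
# and summary elements of a (hypothetical) article page.
speakable_jsonld = {
    "@context": "https://schema.org/",
    "@type": "WebPage",
    "name": "Example article",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".article-headline", ".article-summary"],
    },
    "url": "https://example.com/article",
}

# The JSON-LD is embedded in the page inside a script tag.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(speakable_jsonld, indent=2)
    + "\n</script>"
)
print(script_tag)
```

Generating this block per page is the hand-written markup step that Soundcheck’s plugin reduces to a button press.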
Soundcheck will be free for users who just want the basic plugin, with support for their 50 latest WordPress posts. If you need support for more posts, or you want to do fancier things like custom API integrations and tying into dedicated Amazon Alexa/Google Assistant apps, they’ll charge somewhere between $20 and $79 a month. The company tells me it’s also building out an analytics tool to help publishers better understand where and when their data is being accessed by voice. They also say that support for other content platforms beyond WordPress is on the roadmap.
Soundcheck is founded by Daniel Tyreus and Narendra Rocherolle — the latter of which also co-founded Webshots, the ultra early photo sharing site that sold to Excite@Home for $82.5M in 1999. They originally set out to build Peck, a service founded in 2016 that aimed to figure out the best way to pull in information on a subject and pack it down into its most concise form. They found that one of the toughest parts of that equation was getting data packaged up and ready for smart speakers like Alexa and Google Home — so they pivoted to focus on that.
The team has raised $1.5M to date, backed by True Ventures, Resolute Ventures, Twitter co-founder Biz Stone, and Flickr co-founder Caterina Fake — along with Automattic, the very team behind WordPress.