Here in the States, UBTECH is probably best known for the crazy-walking Star Wars Stormtrooper it released late last year. But the company’s been producing robotics toys for long enough to get a vote of confidence in the form of an $820 million Series C late last year.
The Chinese company’s latest project takes a decidedly more educational bent, with a plan to teach kids STEM skills with the 410-piece JIMU Robot BuilderBots Series: Overdrive Kit. There are two primary robots — a bulldozer and a dump truck — built with a pair of motors and sensors that react to their environment.
A connected app walks you through the building steps with a 360-degree view and also doubles as a remote control, once they’re built. There’s a coding element on board, as well, designed to teach kids a bit of programming using Google’s Blockly language.
At $120, the kit is fairly reasonably priced — though it’s got plenty of competition. Every company from LEGO to Sphero is attempting to horn in on the STEM/STEAM coding market, and here in the States at least, the UBTECH name doesn’t carry much weight.
The product has decent retail outreach, with availability at Target and Amazon, but seeding the product in schools could go a ways toward helping spread it by word of mouth.
Indigo, the startup bringing algorithms and machine learning to the agricultural industry, has raised a $250 million Series E, bringing its total raised to $650 million.
The funding values the company at $3.5 billion, according to a source familiar with the deal. That’s a steep increase from its previously reported value: $1.4 billion with a $156 million Series D last September. Existing investors Baillie Gifford, the Alaska Permanent Fund, the Investment Corporation of Dubai and Flagship Pioneering participated in the round. New investors, who Indigo declined to name, also participated.
Indigo initially launched in 2014 to help farmers improve the health and productivity of their crops with microbial products that protect against the environment, disease and pest stress. Now, the company is expanding its suite of digital tools with the launch of Indigo Marketplace, which is essentially eBay for farmers.
Indigo CEO David Perry, who grew up on a farm in Arkansas, told TechCrunch Indigo expects to do $500 million in bookings this year thanks to the early growth of the new product. That’s more than 8x growth YoY.
Indigo first began rolling out the online grain marketplace to its network of farmers in June and has since seen more than $6 billion worth of grain put up for sale by farmers. Buyers have placed more than 4,000 bids worth $2 billion.
Perry, a serial entrepreneur — he co-founded Better Therapeutics and Anacor Pharmaceuticals, which was acquired for $5.2 billion by Pfizer — says Indigo’s marketplace has had the most rapid adoption of any product launch he’s been associated with in his career.
“It’s part of a bigger plan,” he said. “We knew microbiology was a cornerstone to changing agriculture but it couldn’t do it by itself.”
Using Indigo Marketplace, buyers can purchase grain directly from farmers. They can filter by protein content, milling quality or by production practices, i.e. whether it’s organic or not. Growers are paid based on the quality of their crop, which is determined by a sample sent to a third-party lab for analysis.
Indigo manages the testing, the transportation of the crop and the payment through its new platform.
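To picture the kind of buyer-side query the marketplace implies, here is a minimal sketch of filtering grain listings by protein content and production practices. The field names and the `find_listings` helper are purely illustrative assumptions, not Indigo’s actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class GrainListing:
    # Hypothetical fields; Indigo's real marketplace schema is not public.
    crop: str            # e.g. "hard red winter wheat"
    protein_pct: float   # measured by a third-party lab, per the article
    organic: bool
    price_per_bushel: float

def find_listings(listings, crop, min_protein, organic_only=False):
    """Filter listings the way a buyer might: by crop, protein content and practices."""
    return [
        l for l in listings
        if l.crop == crop
        and l.protein_pct >= min_protein
        and (l.organic or not organic_only)
    ]

# Example: a buyer looking for organic wheat at 12%+ protein.
listings = [
    GrainListing("hard red winter wheat", 12.5, True, 6.10),
    GrainListing("hard red winter wheat", 11.2, False, 5.80),
]
print(find_listings(listings, "hard red winter wheat", min_protein=12.0, organic_only=True))
```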
Venture capital investment in agtech has been steadily growing in the last decade with more and more early-stage startups emerging to compete with the Big 6 of agriculture: Monsanto, Bayer, DuPont, Dow, Syngenta and BASF.
The industry, according to agtech firm Finistere Ventures, has transitioned “from a niche, opportunistic clade of the venture capital investment class, to a legitimate asset class attracting focused and generalist funds.”
As for Indigo, it has been and continues to be the most valuable private agtech startup.
Mabl, a Boston-based startup from the folks who brought you Stackdriver, wants to change software testing using machine learning, and today it announced a $20 million Series B investment led by GV (formerly Google Ventures).
Existing investors CRV and Amplify Partners also participated. As part of the deal, Karim Faris, general partner at GV, will be joining the Mabl board. Today’s investment comes on top of a $10 million Series A announced in February.
While it was at it, the company also announced a brand new enterprise product. In fact, part of the reason for going for a hefty Series B so soon after landing the Series A was because it takes some money to service enterprise clients, company founder Izzy Azeri explained.
Azeri says that when he and his partner Dan Belcher decided to start a new company after selling Stackdriver to Google in 2014, they wanted to be methodical about it. They did some research to find gaps and pain points the new company could address. What they found was that QA wasn’t keeping up with modern development speed.
They saw development and testing teams spending too much time simply maintaining the testing regimen, and they believed with machine learning they could help automate the QA process and deliver it in the form of a cloud service, allowing testing to keep up.
Instead of looking at the code level, Mabl looks at your website or service and alerts you to errors like increased load time, broken links or other problems, and displays the results in a dashboard. When it finds an issue, it flags the step in the process where the problem occurred and sends a screenshot to the test or development team, which can analyze it and fix it if needed.
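For a rough feel of the kind of checks described above — and only that; Mabl’s actual service layers machine learning, test maintenance and screenshot capture on top — here is a minimal sketch that flags slow page loads and broken links using Python’s `requests` library:

```python
import time
import requests
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_page(url, load_time_budget=2.0):
    """Flag slow load times and broken links, roughly what a QA bot would surface."""
    issues = []
    start = time.time()
    resp = requests.get(url, timeout=10)
    elapsed = time.time() - start
    if elapsed > load_time_budget:
        issues.append(f"slow load: {elapsed:.2f}s (budget {load_time_budget}s)")
    parser = LinkCollector()
    parser.feed(resp.text)
    for link in parser.links:
        target = urljoin(url, link)
        if not target.startswith("http"):
            continue
        try:
            r = requests.head(target, allow_redirects=True, timeout=5)
            if r.status_code >= 400:
                issues.append(f"broken link: {target} ({r.status_code})")
        except requests.RequestException as exc:
            issues.append(f"unreachable link: {target} ({exc})")
    return issues

if __name__ == "__main__":
    for issue in check_page("https://example.com"):
        print(issue)
```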
Mabl dashboard. Screenshot: Mabl
They launched in Beta last February and went GA in May. Since then, they were pleasantly surprised to find that larger companies were interested in their service and they knew they needed to beef up the base product to appeal to these customers.
That meant adding secure tunneling, which they call Mabl Link, a higher level of encryption, support for cross-browser testing and integration with enterprise single sign-on. They also needed a higher level of support and training, which are also part of the enterprise package.
They let each customer try the full suite of features for 21 days when they sign up, after which they can drop down to Pro or sign up for the enterprise version, depending on their budgets and requirements.
Mabl currently has 30 employees in Boston, and as they develop the enterprise business, the plan is to bring that up to 70 in the next year as they add enterprise salespeople, customer success staff and, of course, more engineers to keep building the product.
Roman is a rocket ship, and I’m not talking about how it sells Viagra and Cialis. Less than a year after launching its cloud pharmacy for erectile dysfunction with $3 million in funding and a five-person team, Roman has grown to seventy team members and a revenue run-rate in the tens of millions — up 720 percent since January. It’s sparked over a million patient-physician visits, phone calls, and text conversations through its telemedicine portal for getting diagnoses and prescriptions.
And now Roman is ready to expand beyond men, so it’s dropping the ‘Man’.
Today, the newly renamed ‘Ro’ unveiled its next product, Zero, a $129 ‘quit smoking’ kit. It contains a month’s worth of prescription cessation medication bupropion and nicotine gum, plus an app for tracking progress and learning how to stay motivated through hunger, nausea, and cravings. Pre-orders open today.
“Erectile dysfunction medication is a knee brace. It helps you to walk again but the goal would be to not need a knee brace,” says Ro co-founder Zachariah Reitano, who started the company because he lives with ED himself due to a heart medication side effect. “Some people will need ED medication but we’re hoping that a lot of people, through lifestyle changes or quitting smoking, won’t need us any more.”
To get the word out about Zero to women and men alike, as well as build a physician’s electronic medical record system, Ro has also raised a jaw-dropping $88 million Series A round. It was led by FirstMark Capital and joined by SignalFire, Initialized Capital, General Catalyst, Slow Ventures, Sinai Ventures, Torch Capital, BoxGroup, and Tusk Ventures. Initialized and Reddit co-founder Alexis Ohanian and FirstMark managing director Rick Heitzmann will both join Ro’s board to steward this massive infusion of capital.
Roman board member Alexis Ohanian sporting a Roman Zero hat while cheering on his wife, tennis star Serena Williams
“The plan for the money is to continue to build out our own pharmacy” as well as “a lot of the backend infrastructure that we call ‘Ro’ that will allow us to launch these other products and verticals over the next two to three years, including women’s health products,” Reitano tells me. Ohanian writes that “The only thing that exceeds Ro’s execution to date is their vision for the future of healthcare. Unlike other companies in the space, Ro is full-stack and is actually rebuilding the health care experience from the ground up, which means they are able to deliver unrivaled care for patients across the country.”
Ro’s Zero kit
Until recently, 80 percent of Viagra sold online was counterfeit. That made buying erectile dysfunction medication online not only awkward but also dangerous. Yet that number is starting to drop thanks to the explosion in popularity of Roman, as well as fellow direct-to-consumer men’s health startup Hims. “Roman doesn’t lend itself to the typical Instagram unboxing experience, but we get a lot of one-to-one word of mouth,” Reitano says with a chuckle. SEO has also been key to revenue growth, as it’s the first organic search result for ‘buy Viagra’.
“One of the things that’s helped has been me sharing my story [he’s dealt with ED since he was 17], and this ‘check engine light’ concept” that views erections as indicators that a man’s body is in working order. Roman even built a somewhat-silly app called Morning Glory to help men track morning erections. Roman’s whole experience is designed to make patients comfortable with a fundamentally uncomfortable topic. “The fact that this stigma exists is why people don’t talk to their doctor or their partner,” Reitano says.
Roman co-founder Zachariah Reitano
Now Ro wants to take the same clear-eyed approach to helping people quit smoking, starting by getting you to chat with its “telehealth assistant” to get paperwork sorted before you speak with a Ro doctor. The startup says that of the 37.5 million people in the US who smoke, 70 percent want to quit and 50 percent try to quit each year, but only 3 to 5 percent are smoke-free after six months. But with medication, nicotine replacement therapy like gum, tapering down smoking before stopping, and counseling, the quit rate drastically improves to 33 percent after six months.
You get all that from Zero’s kit for $129 per month, compared to $120 on Amazon for just the nicotine gum. Reitano admits that “the margin actually is not fantastic to start. Let’s say it’s slightly below what a typical commerce purchase would be.” But the idea is that if Ro and Zero can help someone quit smoking, patients will turn to it for more of their online pharmacy needs.
One barrier for Ro is that it currently doesn’t accept insurance for its $15 telemedicine appointments, Roman pills, or the Zero kit. Eventually it wants to accept FSA cards for tax-favored spending in hopes of reducing the cost for some patients, but otherwise Ro will require people to pay out-of-pocket, restricting it to a wealthier segment of the population. Reitano admits that “In any space that’s incredibly competitive and highly regulated, there are things out of your control. In our control, there’s an incredible opportunity to make sure we take advantage of the infrastructure we have.”
Reitano concludes, “Honestly, I hope we can live up to what we want to build.”
I’m not going to lie, when Studio Banana released the original Ostrichpillow back in 2012, despite breaking all Kickstarter records at the time, I thought the whole thing might be an elaborate joke. Or, worse still, since the sleep-at-your-desk styled product had found popularity amongst people who worked at startups, Silicon Valley was now parodying itself.
Except that “transformative” design company Studio Banana is based in Europe, with offices in London, Lake Geneva and Madrid. And 500,000 sales and five products later, the joke is arguably on its critics. As I’m fond of telling founders who ask for validation, ultimately it is the market that decides.
Enter the latest Ostrichpillow creation: the aptly named Ostrichpillow Hood. Aptly named because, well, it’s a hood. However, unlike the previous products in the range, which were designed to facilitate sleep in non-traditional places, the Ostrichpillow Hood, we’re told, is to be used in “everyday waking life”.
Specifically, by reducing the ability to see activity in the edges of your field of vision, it is intended to help you focus on the task at hand and/or reduce overstimulation, such as the kind induced by open plan co-working spaces.
The Ostrichpillow Original
“The product we’re launching now is the sixth of the different products that have emerged in the Ostrichpillow family and they’re catering to different needs,” Ali Ganjavian, co-founder of Studio Banana, tells me in a video call yesterday. “Ostrichpillow was really about complete isolation and it was really a statement product… So different products have different use-cases and different functions, and also different social acceptances”.
I suggest that the Ostrichpillow Hood may turn out to be broadly socially acceptable, not least for anyone already familiar with the original Ostrichpillow, but also because asking work colleagues to respect the need to focus is a lot different to asking them to ignore your need to take a nap at your desk. Ganjavian doesn’t disagree, even though there is no doubt the two products share the same design heritage.
“A lot of the stuff we are thinking about now is about the state of mind,” he says, noting that throughout the working day we are bombarded with stimuli and information, from messaging apps, emails, social media, meetings and even something as innocuous as having to say hello to work mates. “[The Ostrichpillow Hood] is really about sheltering. It is not only a physical movement, there is psychology in the way it shelters you… it’s about shifting your mood”.
Next, Ganjavian demonstrates the three positions the Ostrichpillow Hood is designed to be worn in.
The ‘Hood’ position is for when you need to concentrate on something in public, for example when commuting or in an open plan office or coffee shop. Like wearing a pair of visually loud headphones, it also has the added effect of signalling to colleagues that you’d rather not be disturbed or are “wired in“.
The ‘Eclipse’ position, where the hood can be turned around to cover your face completely, is for when you need to truly switch off from your surroundings, such as to deeply think, take a short break or meditate. “If I’ve got my headphones on in that posture then what it allows me to do is to totally isolate myself in the same way I would with an Ostrichpillow but in a much more acceptable way,” says Ganjavian.
Finally, the ‘Hoop’ position, with the hood worn down around your neck, is designed to feel warm and cozy and turns the Ostrichpillow Hood into attire more akin to a fashion accessory.
Adds the Studio Banana co-founder: “What I find really exciting about this moment is that I currently work in between three different geographies, there is so much going on, and how do we create a tool or object that makes me feel good, helps me perform better, and helps me become more efficient, and also feeds that overall well-being that I’m looking for in my workplace. At the same time, I can just walk out into the street with it on and just go home and feel good about it”.
The Seattle-based on-demand delivery and moving service, Dolly, is rolling out its services in San Francisco and Washington, DC, as it eyes a broader national expansion in the coming months after hitting a milestone of $1 million in monthly revenue earlier this year.
What separates Dolly from moving services like the recently launched Moved is its focus on single pieces of furniture or select small items that can fit in the back of a pickup truck. Dolly isn’t for big house moves, but rather getting that couch, cabinet or dresser from the furniture store or the flea market to your home or apartment.
The company currently has 5,000 contracted movers or delivery representatives on its platform and is hoping to add between 200 and 400 more in San Francisco and DC (along with planned expansions into Los Angeles and Orange County) within the next six months.
The company said it hopes to hit the top 30 U.S. markets by the summer of 2019.
“Over the past four years we’ve learned a lot about what works and what doesn’t in terms of the markets we serve,” says Mike Howell, Dolly’s chief executive, in a statement. “We could have expanded a long time ago but we wanted to be smart about where and when to grow the business. Our approach has been intentionally methodical and data-driven.”
Dolly attributes its growth to partnerships with retailers like Costco, Big Lots, Lowe’s and Crate & Barrel, and that data-driven approach which has seen the startup analyze more than 100,000 moves in the markets that it enters.
“We understand that most people do not need a Dolly every day, every week or even every month,” says Howell. “As a result it doesn’t make sense for Dolly to heavily subsidize a first use-case like many subscription businesses do.”
Dolly was launched in Chicago in 2013 by co-founders Howell, Jason Norris, Kelby Hawn and Chad Wittman. The platform uses an algorithm to set prices for jobs, with initial prices ranging from $50 to $85.
And while the company is hoping to bring the on-demand marketplace model to moving services, it’s running into some of the same problems that have hit larger marketplaces like Lyft and Uber in ride-sharing and Airbnb in rentals.
The company has been exchanging lawsuits with Washington state over whether it needs to apply for a license to operate its services in the region. Washington says that Dolly needs to adhere to regulations governing moving services, while Dolly says the laws on the books are archaic and haven’t kept up with changes in consumer behavior and demand.
No matter the outcome, Dolly is pushing ahead with a nationwide rollout. As logistics and last mile delivery become more important, demand for services like Dolly’s should increase.
The iPhone XS proves one thing definitively: that the iPhone X was probably one of the most ambitious product bets of all time.
When Apple told me in 2017 that they put aside plans for the iterative upgrade that they were going to ship and went all in on the iPhone X because they thought they could jump ahead a year, they were not blustering. That the iPhone XS feels, at least on the surface, like one of Apple’s most “S” models ever is a testament to how aggressive the iPhone X timeline was.
I think there will be plenty of people who will see this as a weakness of the iPhone XS, and I can understand their point of view. There are about a half-dozen definitive improvements in the XS over the iPhone X, but none of them has quite the buzzword-worthy effectiveness of a marquee upgrade like 64-bit, 3D Touch or wireless charging — all benefits delivered in previous “S” years.
That weakness, however, is only really present if you view it through the eyes of the year-over-year upgrader. As an upgrade over an iPhone X, I’d say you’re going to have to love what they’ve done with the camera to want to make the jump. As a move from any other device, it’s a huge win and you’re going head-first into sculpted OLED screens, face recognition and super durable gesture-first interfaces and a bunch of other genre-defining moves that Apple made in 2017, thinking about 2030, while you were sitting back there in 2016.
Since I do not have an iPhone XR, I can’t really make a call for you on that comparison, but from what I saw at the event and from what I know about the tech in the iPhone XS and XS Max from using them over the past week, I have some basic theories about how it will stack up.
For those with interest in the edge of the envelope, however, there is a lot to absorb in these two new phones, separated only by size. Once you begin to unpack the technological advancements behind each of the upgrades in the XS, you begin to understand the real competitive edge and competence of Apple’s silicon team, and how well they listen to what the software side needs now and in the future.
Whether that makes any difference for you day to day is another question, one that, as I mentioned above, really lands on how much you like the camera.
But first, let’s walk through some other interesting new stuff.
Notes on durability
As is always true with my testing methodology, I treat this as anyone would who got a new iPhone and loaded an iCloud backup onto it. Plenty of other sites will do clean room testing if you like comparison porn, but I really don’t think that does most folks much good. By and large most people aren’t making choices between ecosystems based on one spec or another. Instead, I try to take them along on prototypical daily carries, whether to work for TechCrunch, on vacation or doing family stuff. A foot injury precluded any theme parks this year (plus, I don’t like to be predictable) so I did some office work, road travel in the center of California and some family outings to the park and zoo. A mix of use cases that involves CarPlay, navigation, photos and general use in a suburban environment.
In terms of testing locale, Fresno may not be the most metropolitan city, but it’s got some interesting conditions that set it apart from the cities where most of the iPhones are going to end up being tested. Network conditions are pretty adverse in a lot of places, for one. There’s a lot of farmland and undeveloped acreage and not all of it is covered well by wireless carriers. Then there’s the heat. Most of the year it’s above 90 degrees Fahrenheit and a good chunk of that is spent above 100. That means that batteries take an absolute beating here and often perform worse than other, more temperate, places like San Francisco. I think that’s true of a lot of places where iPhones get used, but not so much the places where they get reviewed.
That said, battery life has been hard to judge. In my rundown tests, the iPhone XS Max clearly went beast mode, outlasting my iPhone X and iPhone XS. Between those two, though, it was tougher to tell. I try to wait until the end of the period I have to test the phones to do battery stuff so that background indexing doesn’t affect the numbers. In my ‘real world’ testing in the 90+ degree heat around here, the iPhone XS did beat my iPhone X by a few percentage points, which is what Apple claims, but my X is also a year old. I never failed to get through a pretty intense day of testing with the XS, though.
In terms of storage I’m tapping at the door of 256GB, so the addition of a 512GB option is really nice. As always, the easiest way to determine what size you should buy is to check your existing free space. If you’re using around 50% of what your phone currently has, buy the same size. If you’re using more, consider upgrading because these phones are only getting faster at taking better pictures and video and that will eat up more space.
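That rule of thumb is simple enough to write down. A tiny sketch, with the 50 percent threshold being this article’s heuristic rather than anything Apple publishes, and the tiers matching the iPhone XS line’s 64/256/512GB options:

```python
def recommend_capacity(current_gb: int, used_gb: float, tiers=(64, 256, 512)):
    """Apply the rule of thumb above: under ~50% used, keep your size; over, step up a tier."""
    if used_gb <= current_gb * 0.5:
        return current_gb
    larger = [t for t in tiers if t > current_gb]
    return larger[0] if larger else current_gb

print(recommend_capacity(256, 230))  # -> 512: nearly full, so step up a tier
print(recommend_capacity(256, 90))   # -> 256: plenty of headroom, keep the same size
```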
The review units I was given both had the new gold finish. As I mentioned on the day, this is a much deeper, brassier gold than the Apple Watch Edition. It’s less ‘pawn shop gold’ and more ‘this is very expensive’ gold. I like it a lot, though it is hard to photograph accurately — if you’re skeptical, try to see it in person. It has a touch of pink added in, especially as you look at the back glass along with the metal bands around the edges. The back glass has a pearlescent look now as well, and we were told that this is a new formulation that Apple created specifically with Corning. Apple says that this is the most durable glass ever in a smartphone.
My current iPhone has held up to multiple falls over 3 feet over the past year, one of which resulted in a broken screen and replacement under warranty. Doubtless multiple YouTubers will be hitting this thing with hammers and dropping it from buildings in beautiful Phantom Flex slo-mo soon enough. I didn’t test it. One thing I am interested in seeing develop, however, is how the glass holds up to fine abrasions and scratches over time.
My iPhone X is riddled with scratches both front and back, something having to do with the glass formulation being harder, but more brittle. Less likely to break on impact but more prone to abrasion. I’m a dedicated no-caser, which is why my phone looks like it does, but there’s no way for me to tell how the iPhone XS and XS Max will hold up without giving them more time on the clock. So I’ll return to this in a few weeks.
Both the gold and space grey iPhones XS have been subjected to a coating process called physical vapor deposition or PVD. Basically metal particles get vaporized and bonded to the surface to coat and color the band. PVD is a process, not a material, so I’m not sure what they’re actually coating these with, but one suggestion has been Titanium Nitride. I don’t mind the weathering that has happened on my iPhone X band, but I think it would look a lot worse on the gold, so I’m hoping that this process (which is known to be incredibly durable and used in machine tooling) will improve the durability of the band. That said, I know most people are not no-casers like me so it’s likely a moot point.
Now let’s get to the nut of it: the camera.
Bokeh let’s do it
I’m (still) not going to be comparing the iPhone XS to an interchangeable lens camera because portrait mode is not a replacement for those; it’s about pulling them out less. That said, this is the closest it’s ever been.
One of the major hurdles that smartphone cameras have had to overcome in their comparisons to cameras with beautiful glass attached is their inherent depth of focus. Without getting too into the weeds (feel free to read this for more), because they’re so small, smartphone cameras produce an incredibly compressed image that makes everything sharp. This doesn’t feel like a portrait or well composed shot from a larger camera because it doesn’t produce background blur. That blur was added a couple of years ago with Apple’s portrait mode and has been duplicated since by every manufacturer that matters — to varying levels of success or failure.
By and large, most manufacturers do it in software. They figure out what the subject probably is, use image recognition to find the eyes/nose/mouth triangle, build a quick matte and blur everything else. Apple does more by adding the parallax of two lenses OR the IR projector of the TrueDepth array that enables Face ID to gather a 9-layer depth map.
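Stripped of the depth map and the machine learning, that software approach boils down to “blur the whole frame, then composite the sharp subject back in through a matte.” Here is a minimal sketch of that idea using OpenCV, assuming you already have a subject mask from some segmentation step; it’s an illustration of the generic technique, not Apple’s pipeline:

```python
import cv2
import numpy as np

def fake_portrait(image_bgr, subject_mask, ksize=21):
    """Blur the whole frame, then use the matte to keep the subject sharp.

    image_bgr:    H x W x 3 uint8 frame
    subject_mask: H x W float mask in [0, 1], 1.0 where the subject is
    ksize:        Gaussian kernel size (odd); bigger means a blurrier background
    """
    blurred = cv2.GaussianBlur(image_bgr, (ksize, ksize), 0)
    mask3 = np.dstack([subject_mask] * 3)
    out = image_bgr * mask3 + blurred * (1.0 - mask3)
    return out.astype(np.uint8)

# Usage: the mask could come from any person-segmentation model.
frame = cv2.imread("portrait.jpg")
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE) / 255.0
cv2.imwrite("fake_portrait.jpg", fake_portrait(frame, mask))
```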
As a note, the iPhone XR works differently, and with fewer tools, to enable portrait mode. Because it only has one lens, it uses focus pixels and segmentation masking to ‘fake’ the parallax of two lenses.
With the iPhone XS, Apple is continuing to push ahead with the complexity of its modeling for the portrait mode. The relatively straightforward disc blur of the past is being replaced by a true bokeh effect.
Background blur in an image is related directly to lens compression, subject-to-camera distance and aperture. Bokeh is the character of that blur. It’s more than just ‘how blurry’, it’s the shapes produced from light sources, the way they change throughout the frame from center to edges, how they diffuse color and how they interact with the sharp portions of the image.
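For a physical lens, the size of that background blur falls straight out of thin-lens geometry, which also shows why phone-sized optics can’t produce it natively. A quick sketch of the standard relation (textbook optics, nothing Apple-specific):

```python
def blur_disc_mm(focal_mm, f_number, focus_dist_mm, background_dist_mm):
    """Diameter (on the sensor, in mm) of the blur disc for a point at background_dist_mm
    when the lens is focused at focus_dist_mm. Thin-lens approximation:
        b = f^2 * |S2 - S1| / (N * S2 * (S1 - f))
    """
    f, n, s1, s2 = focal_mm, f_number, focus_dist_mm, background_dist_mm
    return (f ** 2) * abs(s2 - s1) / (n * s2 * (s1 - f))

# A 50mm lens at f/1.8 focused at 2m, background at 5m:
print(blur_disc_mm(50, 1.8, 2000, 5000))  # ~0.43 mm on the sensor: clearly blurred on a full-frame camera
# Roughly phone-sized optics (4mm lens) at f/1.8, same scene:
print(blur_disc_mm(4, 1.8, 2000, 5000))   # ~0.003 mm: effectively everything stays sharp
```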
Bokeh is to blur what seasoning is to a good meal. Unless you’re the chef, you probably don’t care what they did; you just care that it tastes great.
Well, Apple chef-ed it the hell up with this. Unwilling to settle for a templatized bokeh that felt good and leave it at that, the camera team went the extra mile and created an algorithmic model that contains virtual ‘characteristics’ of the iPhone XS’s lens. Just as a photographer might pick one lens or another for a particular effect, the camera team built out the bokeh model after testing a multitude of lenses from all of the classic camera systems.
I keep saying model because it’s important to emphasize that this is a living construct. The blur you get will look different from image to image, at different distances and in different lighting conditions, but it will stay true to the nature of the virtual lens. Apple’s bokeh has a medium-sized penumbra, spreading out light sources but not blowing them out. It maintains color nicely, making sure that the quality of light isn’t obscured like it is with so many other portrait applications in other phones that just pick a spot and create a circle of standard gaussian or disc blur.
Check out these two images, for instance. Note that when the light is circular, it retains its shape, as does the rectangular light. It is softened and blurred, as it would be when diffusing through the widened aperture of a regular lens. The same goes with other shapes in reflected light scenarios.
Now here’s the same shot from an iPhone X; note the indiscriminate blur of the light. This modeling effort is why I’m glad that the adjustment slider proudly carries f-stop or aperture measurements. This is what this image would look like at a given aperture, rather than a 0-100 scale. It’s very well done and, because it’s modeled, it can be improved over time. My hope is that eventually, developers will be able to plug in their own numbers to “add lenses” to a user’s kit.
And an adjustable depth of focus isn’t just good for blurring, it’s also good for un-blurring. This portrait mode selfie placed my son in the blurry zone because it focused on my face. Sure, I could turn the portrait mode off on an iPhone X and get everything sharp, but now I can choose to “add” him to the in-focus area while still leaving the background blurry. Super cool feature I think is going to get a lot of use.
It’s also great for removing unwanted people or things from the background by cranking up the blur.
And yes, it works on non humans.
If you end up with an iPhone XS, I’d play with the feature a bunch to get used to what a super wide aperture lens feels like. When it’s open all the way to f/1.4 (not the actual widest aperture of the lens, btw; this is the virtual model we’re controlling), pretty much only the eyes should be in focus. Ears, shoulders, maybe even the nose could be out of the focus area. It takes some getting used to but can produce dramatic results.
A 150% crop of a larger photo to show detail preservation.
Developers do have access to one new feature though, the segmentation mask. This is a more precise mask that aids in edge detailing, improving hair and fine line detail around the edges of a portrait subject. In my testing it has led to better handling of these transition areas and less clumsiness. It’s still not perfect, but it’s better. And third-party apps like Halide are already utilizing it. Halide’s co-creator, Sebastiaan de With, says they’re already seeing improvements in Halide with the segmentation map.
“Segmentation is the ability to classify sets of pixels into different categories,” says de With. “This is different than a ‘Hot dog, not a hot dog’ problem, which just tells you whether a hot dog exists anywhere in the image. With segmentation, the goal is drawing an outline over just the hot dog. It’s an important topic with self-driving cars, because it isn’t enough to tell you there’s a person somewhere in the image. It needs to know that person is directly in front of you. On devices that support it, we use PEM as the authority for what should stay in focus. We still use the classic method on old devices (anything earlier than iPhone 8), but the quality difference is huge.”
The above is an example shot in Halide that shows the image, the depth map and the segmentation map.
“In the example below, the middle black-and-white image is what was possible before iOS 12,” de With continues. “Using a handful of rules like ‘Where did the user tap in the image?’ we constructed this matte to apply our blur effect. It’s not bad by any means, but compare it to the image on the right. For starters, it’s much higher resolution, which means the edges look natural.”
My testing of portrait mode on the iPhone XS says that it is massively improved, but that there are still some very evident quirks that will lead to weirdness in some shots like wrong things made blurry and halos of light appearing around subjects. It’s also not quite aggressive enough on foreground objects — those should blur too but only sometimes do. But the quirks are overshadowed by the super cool addition of the adjustable background blur. If conditions are right it blows you away. But every once in a while you still get this sense like the Neural Engine just threw up its hands and shrugged.
Live preview of the depth control in the camera view is not in iOS 12 at the launch of the iPhone XS, but it will be coming in a future version of iOS 12 this fall.
I also shoot a huge number of photos with the telephoto lens. It’s closer to what you’d consider to be a standard lens on a camera. The normal lens is really wide, and once you acclimate to the telephoto you’re left wondering why you have a bunch of pictures of people in the middle of a ton of foreground and sky. If you haven’t already, I’d say try defaulting to 2x for a couple of weeks and see how you like your photos. For those tight conditions or really broad landscapes you can always drop it back to the wide. Because of this, any iPhone that doesn’t have a telephoto is a basic non-starter for me, which I believe is going to be one of the limiters on people moving to the iPhone XR from the iPhone X. Even iPhone 8 Plus users who rely on the telephoto will, I believe, miss it if they don’t go to the XS.
But, man, Smart HDR is where it’s at
I’m going to say something now that is surely going to cause some Apple followers to snort, but it’s true. Here it is:
For a company as prone to hyperbole and Maximum Force Enthusiasm about its products, I think Apple has dramatically undersold how much photos have improved from the iPhone X to the iPhone XS. It’s extreme, and it has to do with a technique Apple calls Smart HDR.
Smart HDR on the iPhone XS encompasses a bundle of techniques and technology, including highlight recovery, rapid-firing the sensor, an OLED screen with much improved dynamic range and the Neural Engine/image signal processor combo. It’s now running faster sensors and offloading some of the work to the CPU, which enables it to fire off nearly two images for every one it used to, so that motion does not create ghosting in HDR images. It picks the sharpest image, merges the other frames into it in a smarter way and applies tone mapping that produces more even exposure and color in the roughest of lighting conditions.
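To make the frame-merging part a little more concrete, here is a minimal exposure-fusion sketch using OpenCV’s align-and-merge calls. To be clear, this is a generic bracketing illustration; Smart HDR itself is a proprietary zero-shutter-lag pipeline with motion rejection and Neural Engine tone mapping that goes far beyond this:

```python
import cv2

def merge_bracket(paths):
    """Align a burst of differently exposed frames and fuse them into one balanced image."""
    frames = [cv2.imread(p) for p in paths]
    # Median-threshold-bitmap alignment compensates for small hand shake between frames.
    cv2.createAlignMTB().process(frames, frames)
    # Mertens exposure fusion blends frames by contrast, saturation and well-exposedness,
    # producing an HDR-looking result without an explicit tone-mapping step.
    fused = cv2.createMergeMertens().process(frames)
    return (fused * 255).clip(0, 255).astype("uint8")

# Usage: an underexposed, normal and overexposed frame of the same scene.
cv2.imwrite("fused.jpg", merge_bracket(["under.jpg", "normal.jpg", "over.jpg"]))
```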
iPhone XS shot: better range of tones, skin tone and black point
iPhone X shot: not a bad image at all, but blocked-up shadow detail, flatter skin tone and a blue shift
Nearly every image you shoot on an iPhone XS or iPhone XS Max will have HDR applied to it. It does it so much that Apple has stopped labeling most images with HDR at all. There’s still a toggle to turn Smart HDR off if you wish, but by default it will trigger any time it feels it’s needed.
And that includes more types of shots that could not benefit from HDR before. Panoramic shots, for instance, as well as burst shots, low-light photos and every frame of Live Photos are now processed.
The results for me have been massively improved quick snaps with no thought given to exposure or adjustments due to poor lighting. Your camera roll as a whole will just suddenly start looking like you’re a better picture taker, with no intervention from you. All of this is capped off by the fact that the OLED screens in the iPhone XS and XS Max have a significantly improved ability to display a range of color and brightness. So images will just plain look better on the wider gamut screen, which can display more of the P3 color space.
Under the hood
As far as Face ID goes, there has been no perceivable difference for me in speed or number of positives, but my facial model has been training on my iPhone X for a year. It’s starting fresh on the iPhone XS. And I’ve always been lucky that Face ID has just worked for me most of the time. The gist of the improvements here is jumps in acquisition times and confirmation of the map to pattern match. There are also supposed to be improvements in off-angle recognition of your face, say when lying down or when your phone is flat on a desk. I tried a lot of different positions here and could never really definitively say that the iPhone XS was better in this regard, though as I said above, it very likely takes training time to get it near the confidence levels that my iPhone X has stored away.
In terms of CPU performance the world’s first at-scale 7nm architecture has paid dividends. You can see from the iPhone XS benchmarks that it compares favorably to fast laptops and easily exceeds iPhone X performance.
The Neural Engine and the better A12 chip have meant better frame rates in intense games and AR, faster image searches and some small improvements in app launches. One easy way to demonstrate this is the video from the iScape app, captured on an iPhone X and an iPhone XS. You can see how jerky and FPS-challenged the iPhone X is in a similar AR scenario. There is so much more overhead for AR experiences that I know developers are going to be salivating for what they can do here.
The stereo sound is impressive, with surprisingly decent separation for a phone, and it’s definitely louder. The tradeoff is that you get asymmetrical speaker grilles, so if that kind of thing annoys you, you’re welcome.
Upgrade or no
Every other year for the iPhone I see and hear the same things — that the middle years are unimpressive and not worthy of upgrading. And I get it, money matters, phones are our primary computer and we want the best bang for our buck. This year, as I mentioned at the outset, the iPhone X has created its own little pocket of uncertainty by still feeling a bit ahead of its time.
I don’t kid myself into thinking that we’re going to have an honest discussion about whether you want to upgrade from the iPhone X to iPhone XS or not. You’re either going to do it because you want to or you’re not going to do it because you don’t feel it’s a big enough improvement.
And I think Apple is completely fine with that because iPhone XS really isn’t targeted at iPhone X users at all, it’s targeted at the millions of people who are not on a gesture-first device that has Face ID. I’ve never been one to recommend someone upgrade every year anyway. Every two years is more than fine for most folks — unless you want the best camera, then do it.
And, given Apple’s fairly bold talk about making sure that iPhones last as long as they can, I think that it is well into the era where it is planning on having a massive installed user base that rents iPhones from it on a monthly, yearly or biennial basis. And it doesn’t care whether those phones are on their first, second or third owner, because that user base will need for-pay services that Apple can provide. It seems to be moving in that direction already, with phones as old as the five-year-old iPhone 5s still getting iOS updates.
With the iPhone XS, we might just be seeing the true beginning of the iPhone-as-a-service era.
The iPhone XS proves one thing definitively: that the iPhone X was probably one of the most ambitious product bets of all time.
When Apple told me in 2017 that they put aside plans for the iterative upgrade that they were going to ship and went all in on the iPhone X because they thought they could jump ahead a year, they were not blustering. That the iPhone XS feels, at least on the surface, like one of Apple’s most “S” models ever is a testament to how aggressive the iPhone X timeline was.
I think there will be plenty of people who will see this as a weakness of the iPhone XS, and I can understand their point of view. There are about a half-dozen definitive improvements in the XS over the iPhone X, but none of them has quite the buzzword-worthy effectiveness of a marquee upgrade like 64-bit, 3D Touch or wireless charging — all benefits delivered in previous “S” years.
That weakness, however, is only really present if you view it through the eyes of the year-over-year upgrader. As an upgrade over an iPhone X, I’d say you’re going to have to love what they’ve done with the camera to want to make the jump. As a move from any other device, it’s a huge win and you’re going head-first into sculpted OLED screens, face recognition and super durable gesture-first interfaces and a bunch of other genre-defining moves that Apple made in 2017, thinking about 2030, while you were sitting back there in 2016.
Since I do not have an iPhone XR, I can’t really make a call for you on that comparison, but from what I saw at the event and from what I know about the tech in the iPhone XS and XS Max from using them over the past week, I have some basic theories about how it will stack up.
For those with interest in the edge of the envelope, however, there is a lot to absorb in these two new phones, separated only by size. Once you begin to unpack the technological advancements behind each of the upgrades in the XS, you begin to understand the real competitive edge and competence of Apple’s silicon team, and how well they listen to what the software side needs now and in the future.
Whether that makes any difference for you day to day is another question, one that, as I mentioned above, really lands on how much you like the camera.
But first, let’s walk through some other interesting new stuff.
Notes on durability
As is always true with my testing methodology, I treat this as anyone would who got a new iPhone and loaded an iCloud backup onto it. Plenty of other sites will do clean room testing if you like comparison porn, but I really don’t think that does most folks much good. By and large most people aren’t making choices between ecosystems based on one spec or another. Instead, I try to take them along on prototypical daily carries, whether to work for TechCrunch, on vacation or doing family stuff. A foot injury precluded any theme parks this year (plus, I don’t like to be predictable) so I did some office work, road travel in the center of California and some family outings to the park and zoo. A mix of uses cases that involves CarPlay, navigation, photos and general use in a suburban environment.
In terms of testing locale, Fresno may not be the most metropolitan city, but it’s got some interesting conditions that set it apart from the cities where most of the iPhones are going to end up being tested. Network conditions are pretty adverse in a lot of places, for one. There’s a lot of farmland and undeveloped acreage and not all of it is covered well by wireless carriers. Then there’s the heat. Most of the year it’s above 90 degrees Fahrenheit and a good chunk of that is spent above 100. That means that batteries take an absolute beating here and often perform worse than other, more temperate, places like San Francisco. I think that’s true of a lot of places where iPhones get used, but not so much the places where they get reviewed.
That said, battery life has been hard to judge. In my rundown tests, the iPhone XS Max clearly went beast mode, outlasting my iPhone X and iPhone XS. Between those two, though, it was tougher to tell. I try to wait until the end of the period I have to test the phones to do battery stuff so that background indexing doesn’t affect the numbers. In my ‘real world’ testing in the 90+ degree heat around here, iPhone XS did best my iPhone X by a few percentage points, which is what Apple does claim, but my X is also a year old. I didn’t fail to get through a pretty intense day of testing with the XS once though.
In terms of storage I’m tapping at the door of 256GB, so the addition of 512GB option is really nice. As always, the easiest way to determine what size you should buy is to check your existing free space. If you’re using around 50% of what your phone currently has, buy the same size. If you’re using more, consider upgrading because these phones are only getting faster at taking better pictures and video and that will eat up more space.
The review units I was given both had the new gold finish. As I mentioned on the day, this is a much deeper, brassier gold than the Apple Watch Edition. It’s less ‘pawn shop gold’ and more ‘this is very expensive’ gold. I like it a lot, though it is hard to photograph accurately — if you’re skeptical, try to see it in person. It has a touch of pink added in, especially as you look at the back glass along with the metal bands around the edges. The back glass has a pearlescent look now as well, and we were told that this is a new formulation that Apple created specifically with Corning. Apple says that this is the most durable glass ever in a smartphone.
My current iPhone has held up to multiple falls over 3 feet over the past year, one of which resulted in a broken screen and replacement under warranty. Doubtless multiple YouTubers will be hitting this thing with hammers and dropping it from buildings in beautiful Phantom Flex slo-mo soon enough. I didn’t test it. One thing I am interested in seeing develop, however, is how the glass holds up to fine abrasions and scratches over time.
My iPhone X is riddled with scratches both front and back, something having to do with the glass formulation being harder, but more brittle. Less likely to break on impact but more prone to abrasion. I’m a dedicated no-caser, which is why my phone looks like it does, but there’s no way for me to tell how the iPhone XS and XS Max will hold up without giving them more time on the clock. So I’ll return to this in a few weeks.
Both the gold and space grey iPhones XS have been subjected to a coating process called physical vapor deposition or PVD. Basically metal particles get vaporized and bonded to the surface to coat and color the band. PVD is a process, not a material, so I’m not sure what they’re actually coating these with, but one suggestion has been Titanium Nitride. I don’t mind the weathering that has happened on my iPhone X band, but I think it would look a lot worse on the gold, so I’m hoping that this process (which is known to be incredibly durable and used in machine tooling) will improve the durability of the band. That said, I know most people are not no-casers like me so it’s likely a moot point.
Now let’s get to the nut of it: the camera.
Bokeh let’s do it
I’m (still) not going to be comparing the iPhone XS to an interchangeable lens camera because portrait mode is not a replacement for those, it’s about pulling them out less. That said, this is closest its ever been.
One of the major hurdles that smartphone cameras have had to overcome in their comparisons to cameras with beautiful glass attached is their inherent depth of focus. Without getting too into the weeds (feel free to read this for more), because they’re so small, smartphone cameras produce an incredibly compressed image that makes everything sharp. This doesn’t feel like a portrait or well composed shot from a larger camera because it doesn’t produce background blur. That blur was added a couple of years ago with Apple’s portrait mode and has been duplicated since by every manufacturer that matters — to varying levels of success or failure.
By and large, most manufacturers do it in software. They figure out what the subject probably is, use image recognition to see the eyes/nose/mouth triangle is, build a quick matte and blur everything else. Apple does more by adding the parallax of two lenses OR the IR projector of the TrueDepth array that enables Face ID to gather a 9-layer depth map.
As a note, the iPhone XR works differently, and with less tools, to enable portrait mode. Because it only has one lens it uses focus pixels and segmentation masking to ‘fake’ the parallax of two lenses.
With the iPhone XS, Apple is continuing to push ahead with the complexity of its modeling for the portrait mode. The relatively straightforward disc blur of the past is being replaced by a true bokeh effect.
Background blur in an image is related directly to lens compression, subject-to-camera distance and aperture. Bokeh is the character of that blur. It’s more than just ‘how blurry’, it’s the shapes produced from light sources, the way they change throughout the frame from center to edges, how they diffuse color and how they interact with the sharp portions of the image.
Bokeh is to blur what seasoning is to a good meal. Unless you’re the chef, you probably don’t care what they did you just care that it tastes great.
Well, Apple chef-ed it the hell up with this. Unwilling to settle for a templatized bokeh that felt good and leave it that, the camera team went the extra mile and created an algorithmic model that contains virtual ‘characteristics’ of the iPhone XS’s lens. Just as a photographer might pick one lens or another for a particular effect, the camera team built out the bokeh model after testing a multitude of lenses from all of the classic camera systems.
I keep saying model because it’s important to emphasize that this is a living construct. The blur you get will look different from image to image, at different distances and in different lighting conditions, but it will stay true to the nature of the virtual lens. Apple’s bokeh has a medium-sized penumbra, spreading out light sources but not blowing them out. It maintains color nicely, making sure that the quality of light isn’t obscured like it is with so many other portrait applications in other phones that just pick a spot and create a circle of standard gaussian or disc blur.
Check out these two images, for instance. Note that when the light is circular, it retains its shape, as does the rectangular light. It is softened and blurred, as it would when diffusing through the widened aperture of a regular lens. The same goes with other shapes in reflected light scenarios.
Now here’s the same shot from an iPhone X, note the indiscriminate blur of the light. This modeling effort is why I’m glad that the adjustment slider proudly carries f-stop or aperture measurements. This is what this image would look like at a given aperture, rather than a 0-100 scale. It’s very well done and, because it’s modeled, it can be improved over time. My hope is that eventually, developers will be able to plug in their own numbers to “add lenses” to a user’s kit.
And an adjustable depth of focus isn’t just good for blurring, it’s also good for un-blurring. This portrait mode selfie placed my son in the blurry zone because it focused on my face. Sure, I could turn the portrait mode off on an iPhone X and get everything sharp, but now I can choose to “add” him to the in-focus area while still leaving the background blurry. Super cool feature I think is going to get a lot of use.
It’s also great for removing unwanted people or things from the background by cranking up the blur.
And yes, it works on non humans.
If you end up with an iPhone XS, I’d play with the feature a bunch to get used to what a super wide aperture lens feels like. When its open all the way to f1.4 (not the actual widest aperture of the lens btw, this is the virtual model we’re controlling) pretty much only the eyes should be in focus. Ears, shoulders, maybe even nose could be out of the focus area. It takes some getting used to but can produce dramatic results.
A 150% crop of a larger photo to show detail preservation.
Developers do have access to one new feature though, the segmentation mask. This is a more precise mask that aids in edge detailing, improving hair and fine line detail around the edges of a portrait subject. In my testing it has led to better handling of these transition areas and less clumsiness. It’s still not perfect, but it’s better. And third-party apps like Halide are already utilizing it. Halide’s co-creator, Sebastiaan de With, says they’re already seeing improvements in Halide with the segmentation map.
“Segmentation is the ability to classify sets of pixels into different categories,” says de With. “This is different than a “Hot dog, not a hot dog” problem, which just tells you whether a hot dog exists anywhere in the image. With segmentation, the goal is drawing an outline over just the hot dog. It’s an important topic with self driving cars, because it isn’t enough to tell you there’s a person somewhere in the image. It needs to know that person is directly in front of you. On devices that support it, we use PEM as the authority for what should stay in focus. We still use the classic method on old devices (anything earlier than iPhone 8), but the quality difference is huge.
The above is an example shot in Halide that shows the image, the depth map and the segmentation map.
In the example below, the middle black-and-white image is what was possible before iOS 12. Using a handful of rules like, “Where did the user tap in the image?” We constructed this matte to apply our blur effect. It’s no bad by any means, but compare it to the image on the right. For starters, it’s much higher resolution, which means the edges look natural.
My testing of portrait mode on the iPhone XS says that it is massively improved, but that there are still some very evident quirks that will lead to weirdness in some shots like wrong things made blurry and halos of light appearing around subjects. It’s also not quite aggressive enough on foreground objects — those should blur too but only sometimes do. But the quirks are overshadowed by the super cool addition of the adjustable background blur. If conditions are right it blows you away. But every once in a while you still get this sense like the Neural Engine just threw up its hands and shrugged.
Live preview of the depth control in the camera view is not in iOS 12 at the launch of the iPhone XS, but it will be coming in a future version of iOS 12 this fall.
I also shoot a huge amount of photos with the telephoto lens. It’s closer to what you’d consider to be a standard lens on a camera. The normal lens is really wide and once you acclimate to the telephoto you’re left wondering why you have a bunch of pictures of people in the middle of a ton of foreground and sky. If you haven’t already, I’d say try defaulting to 2x for a couple of weeks and see how you like your photos. For those tight conditions or really broad landscapes you can always drop it back to the wide. Because of this, any iPhone that doesn’t have a telephoto is a basic non-starter for me, which is going to be one of the limiters on people moving to iPhone XR from iPhone X, I believe. Even iPhone 8 Plus users who rely on the telephoto I believe will miss it if they don’t go to the XS.
But, man, Smart HDR is where it’s at
I’m going to say something now that is surely going to cause some Apple followers to snort, but it’s true. Here it is:
For a company as prone to hyperbole and Maximum Force Enthusiasm about its products, I think that they have dramatically undersold how much improved photos are from the iPhone X to the iPhone XS. It’s extreme, and it has to do with a technique Apple calls Smart HDR.
Smart HDR on the iPhone XR encompasses a bundle of techniques and technology including highlight recovery, rapid-firing the sensor, an OLED screen with much improved dynamic range and the Neural Engine/image signal processor combo. It’s now running faster sensors and offloading some of the work to the CPU, which enables firing off nearly two images for every one it used to in order to make sure that motion does not create ghosting in HDR images, it’s picking the sharpest image and merging the other frames into it in a smarter way and applying tone mapping that produces more even exposure and color in the roughest of lighting conditions.
iPhone XS shot, better range of tones, skintone and black point
iPhone X Shot, not a bad image at all, but blocking up of shadow detail, flatter skin tone and blue shift
Nearly every image you shoot on an iPhone XS or iPhone XS Max will have HDR applied to it. It does it so much that Apple has stopped labeling most images with HDR at all. There’s still a toggle to turn Smart HDR off if you wish, but by default it will trigger any time it feels it’s needed.
And that includes more types of shots that could not benefit from HDR before. Panoramic shots, for instance, as well as burst shots, low-light photos and every frame of Live Photos are now processed.
The results for me have been massively improved quick snaps with no thought given to exposure or adjustments due to poor lighting. Your camera roll as a whole will just suddenly start looking like you’re a better picture taker, with no intervention from you. All of this is capped off by the fact that the OLED screens in the iPhone XS and XS Max have a significantly improved ability to display a range of color and brightness. So images will just plain look better on the wider gamut screen, which can display more of the P3 color space.
Under the hood
As far as Face ID goes, there has been no perceptible difference for me in speed or number of positives, but my facial model has been training on my iPhone X for a year; it's starting fresh on the iPhone XS. And I've always been lucky that Face ID has just worked for me most of the time. The gist of the improvements here is faster acquisition times and quicker confirmation of the facial map against the stored pattern. There are also supposed to be improvements in off-angle recognition of your face, say when lying down or when your phone is flat on a desk. I tried a lot of different positions here and could never definitively say that the iPhone XS was better in this regard, though, as I said above, it very likely takes training time to get it near the confidence levels my iPhone X has stored away.
In terms of CPU performance, the world's first at-scale 7nm architecture has paid dividends. You can see from the iPhone XS benchmarks that it compares favorably to fast laptops and easily exceeds iPhone X performance.
The Neural Engine and the better A12 chip have meant better frame rates in intense games and AR, faster image searches and some small improvement in app launches. One easy way to demonstrate this is the video from the iScape app, captured on an iPhone X and an iPhone XS. You can see how jerky and FPS-challenged the iPhone X is in a similar AR scenario. There is so much more overhead for AR experiences that I know developers are going to be salivating over what they can do here.
The stereo sound is impressive, with surprisingly decent separation for a phone, and it's definitely louder. The tradeoff is that you get asymmetrical speaker grilles, so if that kind of thing annoys you, you're welcome.
Upgrade or no
Every other year for the iPhone I see and hear the same things — that the middle years are unimpressive and not worthy of upgrading. And I get it, money matters, phones are our primary computer and we want the best bang for our buck. This year, as I mentioned at the outset, the iPhone X has created its own little pocket of uncertainty by still feeling a bit ahead of its time.
I don’t kid myself into thinking that we’re going to have an honest discussion about whether you want to upgrade from the iPhone X to iPhone XS or not. You’re either going to do it because you want to or you’re not going to do it because you don’t feel it’s a big enough improvement.
And I think Apple is completely fine with that because iPhone XS really isn’t targeted at iPhone X users at all, it’s targeted at the millions of people who are not on a gesture-first device that has Face ID. I’ve never been one to recommend someone upgrade every year anyway. Every two years is more than fine for most folks — unless you want the best camera, then do it.
And, given Apple's fairly bold talk about making sure that iPhones last as long as they can, I think it is well into the era where it is planning on having a massive installed user base that rents iPhones from it on a monthly, yearly or biennial basis. And it doesn't care whether those phones are on their first, second or third owner, because that user base will need for-pay services that Apple can provide. It seems to be moving in that direction already, with phones as old as the five-year-old iPhone 5s still getting iOS updates.
With the iPhone XS, we might just be seeing the true beginning of the iPhone-as-a-service era.
For customer service, Ultimate.ai‘s thesis is it’s not humans or AI but humans and AI. The Helsinki- and Berlin-based startup has built an AI-powered suggestion engine that, once trained on clients’ data-sets, is able to provide real-time help to (human) staff dealing with customer queries via chat, email and social channels. So the AI layer is intended to make the humans behind the screens smarter and faster at responding to customer needs — as well as freeing them up from handling basic queries to focus on more complex issues.
AI-fuelled chatbots have fast become a very crowded market, with hundreds of so-called 'conversational AI' startups all vying to serve the customer service cause.
Ultimate.ai stands out by merit of having focused on non-English language markets, says co-founder and CEO Reetu Kainulainen. This is a consequence of the business being founded in Finland, whose language belongs to a cluster of Eastern and Northern Eurasian languages that are plenty removed from English in sound and grammatical character.
“[We] started with one of the toughest languages in the world,” he tells TechCrunch. “With no available NLP [natural language processing] able to tackle Finnish, we had to build everything in house. To solve the problem, we leveraged state-of-the-art deep neural network technologies.
“Today, our proprietary deep learning algorithms enable us to learn the structure of any language by training on our clients’ customer service data. Core within this is our use of transfer learning, which we use to transfer knowledge between languages and customers, to provide a high-accuracy NLU engine. We grow more accurate the more clients we have and the more agents use our platform.”
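Kainulainen doesn't detail the architecture, but the general pattern he describes — reusing a shared, language-agnostic representation and training only a small client-specific piece on each customer's data — can be sketched roughly as below. The character n-gram encoder and intent labels are my own stand-ins, not Ultimate.ai's actual models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Stand-in for a shared, language-agnostic representation: character n-grams
# fit once on pooled, multi-language customer-service text.
shared_corpus = [
    "Mikä on tilaukseni tila?",        # Finnish: what is my order's status?
    "Wo ist meine Bestellung?",        # German: where is my order?
    "I want to return this product",
    "My invoice looks wrong",
]
encoder = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit(shared_corpus)

# Per-client head: trained on a small amount of that client's labelled history,
# while the shared encoder is reused across clients and languages.
client_messages = ["Mikä on tilaukseni tila?", "Haluan palauttaa tuotteen"]
client_intents = ["order_status", "return_request"]
head = LogisticRegression(max_iter=1000).fit(
    encoder.transform(client_messages), client_intents
)

# A new Finnish query is routed through the shared encoder plus the client head.
print(head.predict(encoder.transform(["Missä tilaukseni on?"])))
```

The real system uses deep neural networks and transfer learning rather than n-grams, but the division of labor is the same: the heavy lifting is shared, and only a lightweight layer is learned per client.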
Ultimate.ai was founded in November 2016 and launched its first product in summer 2017. It now has more than 25 enterprise clients, including the likes of Zalando, Telia and Finnair. It also touts partnerships with tech giants including SAP, Microsoft, Salesforce and Genesys — integrating with their Contact Center solutions.
“We partner with these players both technically (on client deployments) and commercially (via co-selling). We also list our solution on their Marketplaces,” he notes.
Before taking in this first seed round, it had raised an angel round of €230k in March 2017, and had otherwise relied on revenue generated by the product from the moment it launched.
The $1.3M seed round is co-led by Holtzbrinck Ventures and Maki.vc.
Kainulainen says one of the “key strengths” of Ultimate.ai’s approach to AI for text-based customer service touch-points is rapid set-up when it comes to ingesting a client’s historical customer logs to train the suggestion system.
“Our proprietary clustering algorithms automatically cluster our customer’s historical data (chat, email, knowledge base) to train our neural network. We can go from millions of lines of unstructured data into a trained deep neural network within a day,” he says.
“Alongside this, our state-of-the-art transfer learning algorithms can seed the AI with very limited data — we have deployed Contact Center automation for enterprise clients with as little as 500 lines of historical conversation.”
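As an illustration of the clustering step he describes — not Ultimate.ai's proprietary algorithms — here is a minimal Python sketch that groups unlabelled historical messages so each cluster can be reviewed once and turned into a training intent, rather than labelling every line by hand.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Unstructured historical logs (chat, email, knowledge base snippets).
history = [
    "Where is my order?", "My package hasn't arrived yet",
    "I want to return these shoes", "How do I send an item back?",
    "My invoice amount is wrong", "I was charged twice",
]

# Vectorize and cluster; each cluster becomes a candidate intent for review.
vectors = TfidfVectorizer(stop_words="english").fit_transform(history)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in range(3):
    print(cluster, [msg for msg, lab in zip(history, labels) if lab == cluster])
```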
Ultimate.ai’s proprietary NLP achieves “state-of-the-art accuracy at 98.6%”, he claims.
It can also make use of what he dubs “semi-supervised learning” to further boost accuracy over time as agents use the tool.
“Finally, we leverage transfer learning to apply a single algorithmic model across all clients, scaling our learnings from client-to-client and constantly improving our solution,” he adds.
On the competitive front, it's going up against the likes of IBM's Watson AI. However, Kainulainen argues that IBM's manual tools — which he says "require large onboarding projects and are limited in languages with no self-learning capabilities" — make that sort of manual approach to chatbot building "unsustainable in the long-term".
He also contends that many rivals are saddled with “lengthy set-up and heavy maintenance requirements” which makes them “extortionately expensive”.
A closer competitor (in terms of approach) which he namechecks is TC Disrupt battlefield alum Digital Genius. But again they’ve got English language origins — so he flags that as a differentiating factor vs the proprietary NLP at the core of Ultimate.ai’s product (which he claims can handle any language).
"It is very difficult to scale out of English to other languages," he argues. "It's also uneconomical to rebuild your architecture to serve multi-language scenarios. Out of necessity, we have been language-agnostic since day one."
“Our technology and team is tailored to the customer service problem; generic conversational AI tools cannot compete,” he adds. “Within this, we are a full package for enterprises. We provide a complete AI platform, from automation to augmentation, as well as omnichannel capabilities across Chat, Email and Social. Languages are also a key technical strength, enabling our clients to serve their customers wherever they may be.”
The multi-language architecture is not the only claimed differentiator, either.
Kainulainen points to the team’s mission as another key factor on that front, saying: “We want to transform how people work in customer service. It’s not about building a simple FAQ bot, it’s about deeply understanding how the division and the people work and building tools to empower them. For us, it’s not Superagent vs. Botman, it’s Superagent + Botman.”
So it's not trying to suggest that AI should replace your entire customer service team, but rather enhance your in-house humans.
Asked what the AI can’t do well, he says this boils down to interactions that are transactional vs relational — with the former category meshing well with automation, but the latter (aka interactions that require emotional engagement and/or complex thought) definitely not something to attempt to automate away.
“Transactional cases are mechanical and AI is good at mechanical. The customer knows what they want (a specific query or action) and so can frame their request clearly. It’s a simple, in-and-out case. Full automation can be powerful here,” he says. “Relational cases are more frequent, more human and more complex. They can require empathy, persuasion and complex thought. Sometimes a customer doesn’t know what the problem is — “it’s just not working”.
“Other times are sales opportunities, which businesses definitely don’t want to automate away (AI isn’t great at persuasion). And some specific industries, e.g. emergency services, see the human response as so vital that they refuse automation entirely. In all of these situations, AI which augments people, rather than replaces, is most effective.
“We see work in customer service being transformed over the next decade. As automation of simple requests becomes the status-quo, businesses will increasingly differentiate through the quality of their human-touch. Customer service will become less labour intensive, higher skilled work. We try and imagine what tools will power this workforce of tomorrow and build them, today.”
On the ethics front, he says customers are always told when they are transferred to a human agent — though that agent will still be receiving AI support (i.e. in the form of suggested replies to help “bolster their speed and quality”) behind the scenes.
Ultimate.ai’s customers define cases they’d prefer an agent to handle — for instance where there may be a sales opportunity.
“In these cases, the AI may gather some pre-qualifying customer information to speed up the agent handle time. Human agents are also brought in for complex cases where the AI has had difficulty understanding the customer query, based on a set confidence threshold,” he adds.
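A hedged sketch of the handover logic described here is below; the threshold value, intent names and helper functions are my own illustration, not Ultimate.ai's API.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    intent: str
    confidence: float
    suggested_reply: str

HUMAN_ONLY_INTENTS = {"sales_opportunity", "complaint"}  # client-defined cases
CONFIDENCE_THRESHOLD = 0.85

def route(message: str, predict) -> str:
    """Decide whether the bot answers or a human agent takes over.

    `predict` is any callable returning a Prediction for the message.
    """
    p = predict(message)
    if p.intent in HUMAN_ONLY_INTENTS:
        # Hand off cases the client always wants a person to handle,
        # passing along the AI suggestion to speed up the agent.
        return f"HANDOFF to agent (intent={p.intent}, suggestion={p.suggested_reply!r})"
    if p.confidence < CONFIDENCE_THRESHOLD:
        # The model isn't sure what the customer means; a human handles it.
        return f"HANDOFF to agent (low confidence {p.confidence:.2f})"
    return p.suggested_reply  # transactional case: answer automatically

# Example with a stubbed-out model:
stub = lambda _msg: Prediction("order_status", 0.93, "Your order ships tomorrow.")
print(route("Where is my order?", stub))
```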
Kainulainen says the seed funding will be used to enhance the scalability of the product, with investments going into its AI clustering system.
The team will also be targeting underserved language markets to chase scale — “focusing heavily on the Nordics and DACH [Germany, Austria, Switzerland]”.
"We are building out our teams across Berlin and Helsinki. We will be working closely with our partners — SAP, Microsoft, Salesforce and Genesys — to further this vision," he adds.
Commenting on the funding in a statement, Jasper Masemann, investment manager at Holtzbrinck Ventures, added: “The customer service industry is a huge market and one of the world’s largest employers. Ultimate.ai addresses the main industry challenges of inefficiency, quality control and high people turnover with latest advancements in deep learning and human machine hybrid models. The results and customer feedback are the best I have seen, which makes me very confident the team can become a forerunner in this space.”
After pilot programs in Atlanta, Dallas, Los Angeles and Chicago were “overwhelmingly positive,” per Instacart CEO Apoorva Mehta’s statement, the grocery delivery service has decided to expand its partnership with U.S. grocer ALDI.
As of Thanksgiving, Instacart users in 35 states and 5,000 ZIP codes will be able to fill their virtual carts with ALDI groceries.
ALDI is in the process of completing a $5 billion remodel, with plans to expand its store count to 2,500 in the next four years. The grocer initially entered into an agreement with Instacart last fall.
The company, valued at $4.3 billion, is available from 15,000 different stores in more than 4,000 cities, reaching 70 percent of all U.S. households. Instacart's goal, according to a 2017 company announcement, is to reach 80 percent of American homes by the end of this year.