16 Aug 2019

Samsung Galaxy Note 10+ review

It’s true, you’ve got the Galaxy Note to thank for your big phone. When the device hit the scene at IFA 2011, large screens were still a punchline. That same year, Steve Jobs famously joked about phones with screens larger than four inches, telling a crowd of reporters, “nobody’s going to buy that.”

In 2019, the average screen size hovers around 5.5 inches. That’s a touch larger than the original Note’s 5.3 inches — a size that was widely mocked by the industry press at the time. Of course, much of the mainstreaming of larger phones comes courtesy of a much improved screen-to-body ratio, another place where Samsung has continued to lead the way.

Samsung Galaxy Note10

In some sense, the Note has been doomed by its own success. As the rest of the industry caught up, the line blended into the background. Samsung didn’t do the product any favors by dropping the pretense of distinction between the Note and its Galaxy S line.

Ultimately, the two products served as an opportunity to have a six-month refresh cycle for its flagships. Samsung, of course, has been hit with the same sort of malaise as the rest of the industry. The smartphone market isn’t the unstoppable machine it appeared to be two or three years back.

Like the rest of the industry, the company painted itself into a corner with the smartphone race, creating flagships good enough to convince users to hold onto them for an extra year or two, greatly slowing the upgrade cycle in the process. Ever-inflating prices have also been a part of smartphone sales stagnation — something Samsung and the Note are as guilty of as any.

So what’s a poor smartphone manufacturer to do? The Note 10 represents baby steps. As it did with the S line recently, Samsung is now offering two models. The base Note 10 represents a rare step backward in terms of screen size, shrinking down slightly from 6.4 to 6.3 inches, while reducing resolution from Quad HD to Full HD.

The seemingly regressive step lets Samsung come in a bit under last year’s jaw-dropping $1,000. The new Note is only $50 cheaper, but moving from four figures to three may have a positive psychological effect on wary buyers. Meanwhile, the slightly smaller screen, coupled with a better screen-to-body ratio, makes for a surprisingly slim device.

If anything, the Note 10+ feels like the true successor to the Note line. The baseline device could have just as well been labeled the Note 10 Lite. That’s something Samsung is keenly aware of, as it targets first-time Note users with the 10 and true believers with the 10+. In both cases, Samsung is faced with the same task as the rest of the industry: offering a compelling reason for users to upgrade.

Earlier this week, a Note 9 owner asked me whether the new device warrants an upgrade. The answer is, of course, no. The pace of smartphone innovation has slowed, even as prices have risen. Honestly, the 10 doesn’t really offer that many compelling reasons to upgrade from the Note 8.

That’s not a slight against Samsung or the Note, per se. If anything, it’s a reflection on the fact that these phones are quite good — and have been for a while. Anecdotally, industry excitement around these devices has been tapering for a while now, and the device’s launch in the midst of the doldrums of August likely didn’t help much.

[gallery ids="1865978,1865980,1865979,1865983,1865982,1865990,1866000,1866005,1866004"]

The past few years have seen smartphones transform from coveted, bleeding-edge luxury to necessity. The good news to that end, however, is that the Note continues to be among the best devices out there.

The common refrain in the earliest days of the phablet was the inability to wrap one’s fingers around the device. It’s a pragmatic issue. Certainly you don’t want to use a phone day to day that’s impossible to hold. But Samsung’s remarkable job of improving the screen-to-body ratio continues here. In fact, the 6.8-inch Note 10+ has roughly the same footprint as the 6.4-inch Note 9.

The issue will still persist for those with smaller hands — though thankfully Samsung’s got a solution for them in the Note 10. For the rest of us, the Note 10+ is easily held in one hand and slipped in and out of pants pockets. I realize these seem like weird things to say at this point, but I assure you they were legitimate concerns in the earliest days of the phablet, when these things were giant hunks of plastic and glass.

Samsung’s curved display once again does much of the heavy lifting here, allowing the screen to stretch nearly from side to side with only a little bezel at the edge. Up top is a hole-punch camera — that’s “Infinity O” to you. Those with keen eyes no doubt immediately noticed that Samsung has dropped the dual selfie camera here, moving toward the more popular hole-punch camera.

The company’s reasoning for this was both aesthetic and, apparently, practical. It moved back down to a single front-facing camera (10 megapixels), using reasoning similar to Google’s for the Pixel’s single rear-facing camera: software has greatly improved what companies can do with a single lens. That’s certainly true to a degree, and a strong case can be made for the selfie camera, of which we generally require less than the rear-facing array.

The company’s gone increasingly minimalist with the design language — something I appreciate. Over the years, as the smartphone has become a day-to-day utility, the product’s design has increasingly gotten out of its own way. The front and back are both made of curved Gorilla Glass that butts up against a thin metal frame, with a total thickness of 7.9 millimeters.

On certain smooth surfaces like glass, you’ll occasionally find the device gliding slightly. I’d say the chances of dropping it are pretty decent with its frictionless design language, so you’re going to want to get a case for your $1,000 phone. Before you do, admire that color scheme on the back. There are four choices in all. Like the rest of the press, we ended up with Aura Glow.

It features a lovely, prismatic effect when light hits it. It’s proven a bit tricky to photograph, honestly. It’s also a fingerprint magnet, but these are the prices we pay to have the prettiest phone on the block.

One of the interesting footnotes here is how much the design of the 10 will be defined by what the device lost. There are two missing pieces here — both of which are a kind of concession from Samsung. And for different reasons, both feel inevitable.

The headphone jack is, of course, the biggie. Samsung kicked and screamed on that one, holding onto the 3.5mm for dear life and roundly mocking the competition (read: Apple) at every turn. The company must have known it was only a matter of time, even before the iPhone dropped the port three years ago.

Courage.

Samsung glossed over the end of the jack (and apparently unlisted its Apple-mocking ads in the process) during the Note’s launch event. It was a stark contrast from a briefing we got around the device’s announcement, where the company’s reps spent significantly more time justifying the move. They know us well enough to know that we’d spend a little time taking the piss out of the company after three years of it making the once ubiquitous port a feature. All’s fair in love and port. And honestly, it was mostly just some good-natured ribbing. Welcome to the club, Samsung.

As for why Samsung did it now, the answer seems to be two-fold. The first is a kind of critical mass in Bluetooth headset usage. Allow me to quote myself from a few weeks back:

The tipping point, it says, came when its internal metrics showed that a majority of users on its flagship devices (the S and Note lines) moved to Bluetooth streaming. The company says the number is now in excess of 70% of users.

Also, as we’re all abundantly aware, the company put its big battery ambitions on hold for a bit as it dealt with…more burning problems. A couple of recalls, a humble press release and an eight-point battery check later, and batteries are getting bigger again. There’s a 3,500mAh battery in the Note 10 and a 4,300mAh one in the 10+. I’m happy to report that the latter got me through a full day plus three hours on a charge. Not bad, given all of the music and videos I subjected it to in that time.

There’s no USB-C dongle in-box. The rumors got that one wrong. You can pick up a Samsung-branded adapter for $15, or get one for much cheaper elsewhere. There is, however, a pair of AKG USB-C headphones in-box. I’ve said this before and I’ll say it again: Samsung doesn’t get enough credit for its free headphones. I’ve been known to use the pairs with other devices. They’re not the greatest in the world, but they’re better sounding and more comfortable than what a lot of other companies offer in-box.

Obviously the standard no headphone jack things apply here. You can’t use the wired headphones and charge at the same time (unless you go wireless). You know the deal.

The other missing piece here is the Bixby button. I’m sure there are a handful of folks out there who will bemoan its loss, but that’s almost certainly a minority of the minority here. Since the button was first introduced, folks were asking for the ability to remap it. Samsung finally relented on that front, and with the Note 10, it drops the button altogether.

Thus far the smart assistant has been a disappointment. That’s due in no small part to a late launch compared to the likes of Siri, Alexa and Assistant, coupled with a general lack of capability at launch. In Samsung’s defense, the company’s been working to fix that with some pretty massive investment and a big push to court developers. There’s hope for Bixby yet, but a majority of users weren’t eager to have the assistant thrust upon them.

Instead, the power button has been shifted to the left of the device, just under the volume rocker. I preferred having it on the other side, especially for certain functions like screenshotting (something, granted, I do much more than the average user when reviewing a phone). That’s a pretty small quibble, of course.

Bixby can now be quickly accessed by holding down the power button. Handily, Samsung still lets you reassign that function, if you really want Bixby out of your life. You can also hold down for the power-off menu, or double press to launch Bixby or a third-party app (I opted for Spotify, probably my most used these days) — though not a different assistant.

Imaging, meanwhile, is something Samsung’s been doing for a long time. The past several generations of S and Note devices have had great camera systems, and it continues to be the main point of improvement. It’s also one of few points of distinction between the 10 and 10+, aside from size.

The Note 10+ has four, count ’em, four rear-facing cameras. They are as follows:

  • Ultra Wide: 16 megapixel
  • Wide: 12 megapixel
  • Telephoto: 12 megapixel
  • DepthVision

That last one is only on the plus. It consists of two little circles to the right of the primary camera array, just below the flash. We’ll get to that in a second.

The main camera array continues to be one of the best in mobile. The inclusion of telephoto and ultra-wide lenses allows for a wide range of different shots, and the hardware coupled with machine learning makes it a lot more difficult to take a bad photo (though believe me, it’s still possible).

[gallery ids="1869716,1869715,1869720,1869718,1869719"]

The live focus feature (Portrait mode, essentially) comes to video, with four different filters, including Color Point, which makes everything but the subject black and white.

Samsung’s also brought a very simple video editor into the mix here, which is nice on the fly. You can edit the length of clips, splice in other clips, add subtitles and captions and add filters and music. It’s pretty beefy for something baked directly into the camera app, and one of the better uses I’ve found for the S Pen.

Note 10+ with Super Steady (left), iPhone XS (right)

Ditto for the improved Super Steady offering, which smooths out shaky video, including in Hyperlapse mode, where hand shake is a big issue. It works well, but you do lose access to other features, including zoom. For that reason, it’s off by default and should be used relatively sparingly.

Note 10+ (left), iPhone XS (right)

Zoom-on Mic is a clever addition, as well. While shooting video, pinch-zooming on something will amplify the noise from that area. I’ve been playing around with it in this cafe. It’s interesting, but less than perfect.

[gallery ids="1869186,1869980,1869975,1869974,1869973,1869725,1869322,1869185,1869184,1869190"]

Zooming into something doesn’t exactly cancel out ambient noise from outside of the frame. Everything still gets amplified and, like digital picture zoom, a lot of noise gets added in the process. For those hoping for a kind of spy microphone, I’m sorry/happy to report that this definitely is not that.

The DepthVision Camera is also pretty limited as I write this. If anything, it’s Samsung’s attempt to brace for a future when things like augmented reality will (theoretically) play a much larger role in our mobile computing. In a conversation I had with the company ahead of launch, they suggested that a lot of the camera’s AR functions will fall in the hands of developers.

For now, Quick Measure is the one practical use. The app is a lot like Apple’s more simply titled Measure. Fire it up, move the camera around to get a lay of the land and it will measure nearby objects for you. An interesting showcase for AR potential? Sure. Earth shattering? Naw. It also seems to be a bit of a battery drain, sucking up the last few bits of juice as I was running it down.

3D Scanner, on the other hand, got by far the biggest applause line of the Note event. And, indeed, it’s impressive. In the stage demo, a Samsung employee scanned a stuffed pink beaver (I’m not making this up), created a 3D image and animated it using an associate’s movements. Practical? Not really. Cool? Definitely.

It was, however, not available at press time. Hopefully it proves to be more than vaporware, especially if that demo helped push some viewers over to the 10+. Without it, there’s just not a lot of use for the depth camera at the moment.

There’s also AR Doodle, which fills a similar spot as much of the company’s AR offerings. It’s kind of fun, but again, not particularly useful. You’ll likely end up playing with it for a few minutes and forget about it entirely. Such is life.

The feature is built into the camera app, using depth sensing to orient live drawings. With the stylus you can draw in space or doodle on people’s faces. It’s neat, the AR works okay and I was bored with it in about three minutes. Like Quick Measure, the feature is as much a proof of concept as anything. But that’s always been a part of Samsung’s kitchen-sink approach — some combination of useful and silly.

That said, points to Samsung for continuing to de-creepify AR Emojis. Those have moved firmly away from the uncanny valley into something more cartoony/adorable. Less ironic usage will surely follow.

Asked about the key differences between the S and Note lines, Samsung’s response was simple: the S Pen. Otherwise, the lines are relatively interchangeable.

Samsung’s revival of the stylus didn’t catch on for handsets quite like the phablet form factor did. Styluses have made a pretty significant comeback for tablets, but the Note remains fairly singular when it comes to the S Pen. I’ve never been a big user myself, but those who like it swear by it. It’s one of those things, like the ThinkPad pointing stick or the BlackBerry scroll wheel.

Like the phone itself, the peripheral has been streamlined with a unibody design. Samsung also continues to add capabilities. It can be used to control music, advance slideshows and snap photos. None of that is likely to convince S Pen skeptics (I prefer using the buttons on the included headphones for music control, for example), but more versatility is generally a good thing.

If anything is going to convince people to pick up the S Pen this time out, it’s the improved handwriting recognition. That’s pretty impressive. It was even able to decipher my awful chicken scratch.

You get the same sort of bleeding-edge specs here you’ve come to expect from Samsung’s flagships. The 10+ gets you a baseline 256GB of storage (upgradable to 512), coupled with a beefy 12GB of RAM (the regular Note is a still good 8GB/256GB). The 5G version sports the same numbers and battery (likely making its total life a bit shorter per charge). That’s a shift from the S10, whose 5G version was specced out like crazy. Likely Samsung is bracing for 5G to become less of a novelty in the next year or so.

The new Note also benefits from other recent additions, like the in-display fingerprint reader and wireless power sharing. Both are nice additions, but neither is likely enough to warrant an immediate upgrade.

Once again, that’s not an indictment of Samsung, so much as a reflection of where we are in the life cycle of a mature smartphone industry. The Note 10+ is another good addition to one of the leading smartphone lines. It succeeds as both a productivity device (thanks to additions like DeX and added cross-platform functionality with Windows 10) and an everyday handset.

There’s not enough on-board to really recommend an upgrade from the Note 8 or 9 — especially at that $1,099 price. People are holding onto their devices for longer, and for good reason (as detailed above). But if you need a new phone, are looking for something big and flashy and are willing to splurge, the Note continues to be the one to beat.

[gallery ids="1869169,1869168,1869167,1869166,1869165,1869164,1869163,1869162,1869161,1869160,1869159,1869158,1869157,1869156,1869155,1869154,1869153,1869152"]

16 Aug 2019

Flexible stick-on sensors could wirelessly monitor your sweat and pulse

As people strive ever harder to minutely quantify their every action, the sensors that monitor those actions are growing lighter and less invasive. Two prototype sensors, from Bay Area rivals Stanford and Berkeley, stick right to the skin and provide a wealth of physiological data.

Stanford’s stretchy wireless “BodyNet” isn’t just flexible in order to survive being worn on the shifting surface of the body; that flexing is where its data comes from.

The sensor is made of metallic ink laid on top of a flexible material like that in an adhesive bandage. But unlike phones and smart watches, which use tiny accelerometers or optical tricks to track the body, this system relies on how it is itself stretched and compressed. These movements cause tiny changes in how electricity passes through the ink, changes that are relayed to a processor nearby.

Naturally if one is placed on a joint, as some of these electronic stickers were, it can report back whether and how much that joint has been flexed. But the system is sensitive enough that it can also detect the slight changes the skin experiences during each heartbeat, or the broader changes that accompany breathing.
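
The relationship the Stanford team exploits — resistance varying with stretch — is the classic strain-gauge principle. As a rough, entirely hypothetical sketch (the gauge factor, baseline resistance, threshold and pulse amplitude below are invented for illustration, not BodyNet’s actual numbers), converting a resistance trace to strain and counting rising-edge threshold crossings is enough to pull a pulse out of a clean signal:

```python
# Illustration of the principle a stretch sensor relies on: deforming a
# conductive trace changes its resistance, and tiny periodic changes
# (like a pulse) show up as peaks in that signal. All numbers are invented.
import math

GAUGE_FACTOR = 2.0      # typical for metallic strain gauges (assumed)
R_BASELINE = 120.0      # unstretched resistance in ohms (assumed)

def resistance_to_strain(r_measured):
    """Invert the strain-gauge relation dR/R = GF * strain."""
    return (r_measured - R_BASELINE) / (R_BASELINE * GAUGE_FACTOR)

def detect_peaks(samples, threshold):
    """Indices where the signal crosses above threshold (rising edges)."""
    peaks = []
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            peaks.append(i)
    return peaks

# Simulate 5 seconds of a 72 bpm pulse at 50 Hz as tiny resistance wiggles.
fs, bpm = 50, 72
signal = [R_BASELINE + 0.05 * math.sin(2 * math.pi * (bpm / 60) * (n / fs))
          for n in range(5 * fs)]
strain = [resistance_to_strain(r) for r in signal]
beats = detect_peaks(strain, threshold=0.0001)
print(len(beats))  # 6 rising edges in 5 seconds at 72 bpm
```

A real system would of course have to contend with noise, drift and motion artifacts, but the core signal chain is this simple.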

The problem comes when you have to get that signal off the skin. Using a wire is annoying and definitely very ’90s. But antennas don’t work well when they’re flexed in weird directions: efficiency drops off a cliff, and there’s very little power to begin with, since the skin sensor is powered by harvesting RFID signals, a technique that yields very little in the way of voltage.

The second part of their work, then, and the part that is clearly most in need of further improvement and miniaturization, is the receiver, which collects and re-transmits the sensor’s signal to a phone or other device. Although they managed to create a unit that’s light enough to be clipped to clothes, it’s still not the kind of thing you’d want to wear to the gym.

The good news is that’s an engineering and design limitation, not a theoretical one — so a couple years of work and progress on the electronics front and they could have a much more attractive system.

“We think one day it will be possible to create a full-body skin-sensor array to collect physiological data without interfering with a person’s normal behavior,” said Stanford professor Zhenan Bao in a news release.

Over at Cal is a project in a similar domain that’s working to get from prototype to production. Researchers there have been working on a sweat monitor for a few years that could detect a number of physiological factors.

Normally you’d just collect sweat every 15 minutes or so and analyze each batch separately. But that doesn’t really give you very good temporal resolution — what if you want to know how the sweat changes minute by minute or less? By putting the sweat collection and analysis systems together right on the skin, you can do just that.

While the sensor has been in the works for a while, it’s only recently that the team has started moving toward user testing at scale to see what exactly sweat measurements have to offer.

“The goal of the project is not just to make the sensors but start to do many subject studies and see what sweat tells us — I always say ‘decoding’ sweat composition. For that we need sensors that are reliable, reproducible, and that we can fabricate to scale so that we can put multiple sensors in different spots of the body and put them on many subjects,” explained Ali Javey, Berkeley professor and head of the project.

As anyone who’s working in hardware will tell you, going from a hand-built prototype to a mass-produced model is a huge challenge. So the Berkeley team tapped their Finnish friends at VTT Technical Research Center, who make a specialty of roll-to-roll printing.

For flat, relatively simple electronics, roll-to-roll is a great technique, essentially printing the sensors right onto a flexible plastic substrate that can then simply be cut to size. This way they can make hundreds or thousands of the sensors quickly and cheaply, making them much simpler to deploy at arbitrary scales.

These are far from the only flexible or skin-mounted electronics projects out there, but it’s clear that we’re approaching the point when they begin to leave the lab and head out to hospitals, gyms, and homes.

The paper describing Stanford’s flexible sensor appeared this week in the journal Nature Electronics, while Berkeley’s sweat tracker was in Science Advances.

16 Aug 2019

ClearBrain launches analytics tools focused on connecting cause and effect

Businesses need to understand cause-and-effect: If someone did X and it increased sales, or they did Y and it hurt sales. That’s why many of them turn to analytics — but Bilal Mahmood, co-founder and CEO of ClearBrain, said existing analytics platforms can’t answer that question accurately.

“Every analytics platform today is still based on a fundamental correlation model,” Mahmood said. It’s the classic correlation-versus-causation problem — you can use the data to suggest that an action and a result are related, but you can’t draw a direct cause-and-effect relationship.

That’s the problem that ClearBrain is trying to solve with its new “causal analytics” tool. As the company put it in a blog post, “Our goal was to automate this process [of running statistical studies] and build the first large-scale causal inference engine to allow growth teams to measure the causal effect of every action.”

You can read the post for (many) more details, but the gist is that Mahmood and his team claim they can accurately draw causal relationships where others can’t.
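
To make the correlation-versus-causation distinction concrete, here’s a toy sketch (the scenario and every number in it are fabricated, and this is a generic textbook adjustment, not ClearBrain’s actual method). A naive comparison of feature users versus non-users overstates a feature’s effect when “power users” both adopt it more and convert more; stratifying by user type — a simple backdoor adjustment — recovers a much smaller effect:

```python
# Toy demonstration of confounding: the naive correlational estimate of a
# feature's effect on conversion is inflated by user type, while a
# stratified (adjusted) estimate is not. All records are fabricated.

def rate(events):
    """Conversion rate of a list of booleans."""
    return sum(events) / len(events)

# (user_type, used_feature, converted) records.
data = (
    [("power", True, True)] * 80 + [("power", True, False)] * 20 +
    [("power", False, True)] * 7 + [("power", False, False)] * 3 +
    [("casual", True, True)] * 3 + [("casual", True, False)] * 7 +
    [("casual", False, True)] * 18 + [("casual", False, False)] * 72
)

# Naive (correlational) estimate: feature users vs. non-users overall.
used = [c for (_, f, c) in data if f]
unused = [c for (_, f, c) in data if not f]
naive_effect = rate(used) - rate(unused)

# Adjusted estimate: effect within each user type, averaged by how
# common each type is (a simple form of backdoor adjustment).
types = {t for (t, _, _) in data}
adjusted = 0.0
for t in types:
    stratum = [(f, c) for (tt, f, c) in data if tt == t]
    w = len(stratum) / len(data)
    s_used = [c for (f, c) in stratum if f]
    s_unused = [c for (f, c) in stratum if not f]
    adjusted += w * (rate(s_used) - rate(s_unused))

print(round(naive_effect, 3), round(adjusted, 3))  # naive is ~5x larger
```

Automating this kind of adjustment across every tracked variable, at scale, is roughly the job a causal inference engine sets out to do.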

The idea is to use this in conjunction with A/B testing — customers look at the data to prioritize what to test next, and to make estimates about the impact of things that can’t be tested. Otherwise, Mahmood said, “If you wanted to measure the actual impact of every variable on your website and your app — the actual impact it has on conversion — it could take you years.”

When I wrote about ClearBrain last year, it was using artificial intelligence to improve ad targeting, but Mahmood said the company built the new analytics technology in response to customer demand: “People didn’t just want to know who was going to convert, they wanted to know why, and what caused them to do so.”

The causal analytics tool is currently available to early access users, with plans for a full launch in October. Mahmood said there will be a number of pricing tiers, but they’ll be structured to make the product free for many startups.

In addition to launching the analytics tool in early access, ClearBrain also announced this week that it’s raised an additional $2 million in funding from Harrison Metal and Menlo Ventures.

16 Aug 2019

L.A.-based Upfront Ventures has two new general partners, bringing its GP count to eight

Upfront Ventures, the 23-year-old, L.A.-based venture capital firm, is gearing up for far more deal-making.

In addition to filing paperwork with the SEC this summer to raise its third growth-stage investment fund (it is also investing a $400 million early-stage fund and will probably announce another soon), the firm just added two new general partners to its lineup of investors.

One of them, Michael Carney, joined Upfront as a principal in 2015, after working as an editor at the news site Pandodaily, and, before that, working as an investor and analyst at a boutique merchant bank called Worldvest.

The firm’s second new general partner is Aditi Maliwal, who has also circled in and out of investing before, including stints as an associate with Crosslink Capital and, more recently, spending several years with Google, where Maliwal worked in corporate development before becoming a project manager.

We talked with both this week to congratulate them, as well as to learn more about where they’ll be shopping — and from where.

For her part, Maliwal, who begins work at Upfront next month, says the idea is for her to eventually open a San Francisco office, though for now, she’ll be operating from the Bay Area out of a space that’s yet to be determined and spending every Monday or every other Monday down in L.A. with the rest of the team.

She got to know Upfront through another general partner, Kara Nortman, who joined the firm in 2014. “We’d continue to see each other at events. I also have family ties in L.A., so would see her there.” Maliwal says she also observed on those trips that the “ecosystem in L.A. has really grown from 2014 to where it is today. I think the Bay Area continues to see how important it is, too.”

As for becoming an investor again, Maliwal says she was always interested in venture capital, thanks in part to a Stanford class taught by renowned venture capitalist Heidi Roizen that inspired her. She says spending time with founders in her husband’s business school class at Stanford this past year whetted her appetite anew. “There are four or five companies I’m close to and they’re good friends, and when I was up at 11 pm working on a company idea with one of them earlier this year, I just realized that this is what gives me a lot of energy and this is a space I want to [get involved in again].”

As for what she’ll be focusing on, she says it will most likely be business-to-business-to-consumer models, as well as SaaS applications, fintech and, when the opportunity arises, consumer products. More broadly, says Maliwal, she hopes to serve as a bridge for Bay Area startups looking for a foothold in the L.A. market, and vice versa.

Meanwhile, Carney is, and will remain, more focused on later-stage bets that Upfront funded early on and whose success the firm wants to ensure (to the extent that any firm can). Understandably, he sounds excited — still — about the work.

“In 2012, [when I was at Pandodaily] L.A. was crossing an inflection point, with a number of second- and third-time founders coming out of later-stage marquee companies. When I joined Upfront, it felt similar. It was an incredible platform, it was a year or two after the firm was rebranded [from GRP Ventures] and Kara had been there less than a year and [fellow general partner] Greg [Bettinelli] had been there maybe two years. The team was kind of maturing and I feel lucky to join when I did.”

Carney suggests the opportunities have only grown stronger in his corner of the later-stage world. “We’re definitely seeing [greater bifurcation] between the haves and have-nots, with companies that can break out as clear leaders tending to have access to larger amounts of capital than in past years. For the best of the best, the conditions remain as favorable as possible, while it’s gotten harder for companies that fail to hit those growth rates to raise capital, even in good times.”

Being able to recruit employees from roles at top companies in the Bay Area is just one reason solid L.A. companies have attained more momentum. “I think that owes to the maturation of the L.A. ecosystem. I think people are drawn to L.A. because Silicon Valley, for all its incredible success in the tech sector, is an industry town and L.A. has a more diverse economy and ecosystem. But also, five years ago, people would ask themselves, ‘If this new role [in L.A.] doesn’t work out, what do I do next?’ And I think the answer to that question is much clearer and more positive today.”

According to Upfront, 40 percent of its initial checks are written to companies based in L.A., though it has bets in other parts of the U.S. and world. Some of the best-known deals in its current portfolio include the scooter company Bird, the sneaker marketplace GOAT, and the online resale store ThredUp. Upfront was also an investor in Ring, the smart doorbell company acquired early last year by Amazon for $1 billion.

In addition to Maliwal, Carney, Nortman and Bettinelli, the firm is managed by general partners Kobie Fuller, Kevin Zhang, Mark Suster and founder Yves Sisteron.

16 Aug 2019

Daily Crunch: Cloudflare is going public

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Cloudflare files for initial public offering

The web infrastructure company has recently found itself at the center of political debates around some of its customers, including social media networks like 8chan and racist media companies like the Daily Stormer. In fact, the company went so far as to cite 8chan as a risk factor in its public offering documents.

Even so, the company will likely be worth billions when it starts trading on the market. And we have to point out that Cloudflare actually made its debut on TechCrunch’s Battlefield stage back in 2010.

2. Apple is suing Corellium

Corellium allows customers to create and interact with virtual iOS devices — a software iPhone, for example, running actual iOS firmware, all within the browser. Apple says this is copyright infringement.

3. YouTube shuts down music companies’ use of manual copyright claims to steal creator revenue

Going forward, copyright owners will no longer be able to monetize creator videos with very short or unintentional uses of music via YouTube’s “Manual Claiming” tool. Instead, they can choose to prevent the other party from monetizing the video, or they can block the content.

4. Twitter leads $100M round in top Indian regional social media platform ShareChat

ShareChat serves 60 million users each month in 15 regional languages — but not English, which the company says is a deliberate choice.

5. Apple, Samsung continue growth as North American wearables market hits $2B

The numbers aren’t exactly earth-shattering, but they show significant growth for a category that felt almost dead in the water a year or two back.

6. Local governments are forcing the scooter industry to grow up fast

For scooter startups, city regulations can make or break their businesses across nearly every aspect of operations, especially in two key areas: ridership growth and the ability to attract investor dollars. (Extra Crunch membership required.)

7. Microsoft Azure CTO Mark Russinovich will join us for TC Sessions: Enterprise on September 5

In a fireside chat, we’ll ask Russinovich about what Microsoft is doing to reduce the complexity of moving legacy systems to the cloud, and how enterprises can maximize their current cloud investments.

16 Aug 2019

How ‘ghost work’ in Silicon Valley pressures the workforce, with Mary Gray

The phrase “pull yourself up by your own bootstraps” was originally meant sarcastically.

It’s not actually physically possible to do — especially while wearing Allbirds and having just fallen off a Bird scooter in downtown San Francisco, but I should get to my point.

This week, Ken Cuccinelli, the acting director of United States Citizenship and Immigration Services, repeatedly referred to the notion of bootstraps in announcing shifts in immigration policy, even going so far as to change the words to Emma Lazarus’s famous poem “The New Colossus”: no longer “give me your tired, your poor, your huddled masses yearning to breathe free,” but “give me your tired and your poor who can stand on their own two feet, and who will not become a public charge.”

We’ve come to expect “alternative facts” from this administration, but who could have foreseen alternative poems?

Still, the concept of ‘bootstrapping’ is far from limited to the rhetorical territory of the welfare state and social safety net. It’s also a favorite term of art in Silicon Valley tech and venture capital circles: see, for example, this excellent (and scary) recent piece by my editor Danny Crichton, in which young VC firms attempt to overcome a lack of the startup capital that is essential to their business model by creating, as perhaps an even more essential feature of their model, impossible working conditions for most everyone involved, often with predictably disastrous results.

It is in this context of unrealistic expectations about people’s labor that I want to introduce my most recent interviewee in this series of in-depth conversations about ethics and technology.

Mary L. Gray is a Fellow at Harvard University’s Berkman Klein Center for Internet and Society and a Senior Researcher at Microsoft Research. One of the world’s leading experts in the emerging field of ethics in AI, Mary is also an anthropologist who maintains a faculty position at Indiana University. With her co-author Siddharth Suri (a computer scientist), Gray coined the term “ghost work,” as in the title of their extraordinarily important 2019 book, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. 

Image via Mary L. Gray / Ghostwork / Adrianne Mathiowetz Photography

Ghost Work is a name for a rising new category of employment that involves people scheduling, managing, shipping, billing, etc. “through some combination of an application programming interface, APIs, the internet and maybe a sprinkle of artificial intelligence,” Gray told me earlier this summer. But what really distinguishes ghost work (and makes Mary’s scholarship around it so important) is the way it is presented and sold to the end consumer as artificial intelligence and the magic of computation.

In other words, just as we have long enjoyed telling ourselves that it’s possible to hoist ourselves up in life without help from anyone else (I like to think anyone who talks seriously about “bootstrapping” should be legally required to rephrase as “raising oneself from infancy”), we now attempt to convince ourselves and others that it’s possible, at scale, to get computers and robots to do work that only humans can actually do.

Ghost work’s purpose, as I understand it, is to elevate the value of what the computers are doing (a minority of the work) and to make us forget, as much as possible, about the actual messy human beings contributing to the services we use. Well, except for the founders, and maybe the occasional COO.

Facebook now has far more employees than Harvard has students, but many of us still talk about it as if it were little more than Mark Zuckerberg, Sheryl Sandberg, and a bunch of circuit boards.

But if working people are supposed to be ghosts, then when they speak up or otherwise make themselves visible, they are “haunting” us. And maybe it can be haunting to be reminded that you didn’t “bootstrap” yourself to billions or even to hundreds of thousands of dollars of net worth.

Sure, you worked hard. Sure, your circumstances may well have stunk. Most people’s do.

But none of us rise without help, without cooperation, without goodwill, both from those who look and think like us and those who do not. Not to mention dumb luck, even if only our incredible good fortune of being born with a relatively healthy mind and body, in a position to learn and grow, here on this planet, fourteen billion years or so after the Big Bang.

I’ll now turn to the conversation I recently had with Gray, which turned out to be surprisingly more hopeful than perhaps this introduction has made it seem.

Greg Epstein: One of the most central and least understood features of ghost work is the way it revolves around people constantly making themselves available to do it.

Mary Gray: Yes, [what Siddharth Suri and I call ghost work] values having a supply of people available, literally on demand. Their contributions are collective contributions.

It’s not one person you’re hiring to take you to the airport every day, or to confirm the identity of the driver, or to clean that data set. Unless we’re valuing that availability of a person, to participate in the moment of need, it can quickly slip into ghost work conditions.

16 Aug 2019

Lumineye helps first responders identify people through walls

Any first responder knows that situational awareness is key. In domestic violence disputes, hostage rescue, or human trafficking situations, first responders often need help determining where humans are behind closed doors.

That’s why Megan Lacy, Corbin Hennen and Rob Kleffner developed Lumineye, a 3D-printed radar device that uses signal analysis software to differentiate moving, breathing humans from other objects through walls.

Lumineye uses pulse radar technology that works like echolocation (the way bats and dolphins navigate). It sends out a signal and listens for how long it takes the pulse to bounce back. The software analyzes these pulses to determine the approximate size, range and movement characteristics of a target.

On the software side, Lumineye’s app tells a user how far away a person is when they’re moving or breathing. It’s one-dimensional, so it doesn’t tell the user whether the subject is to the right or left. But the device can detect humans out to 50 feet in open air, and that range decreases depending on the materials in between, like drywall, brick or concrete.
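The time-of-flight ranging described above reduces to simple arithmetic: distance is the pulse’s round-trip time multiplied by the speed of light, halved. Here’s a rough illustrative sketch of that conversion (the delay value and function are our own, not Lumineye’s actual signal-processing code):

```python
# Rough sketch of pulse-radar ranging: distance from echo round-trip time.
# Illustrative only -- not Lumineye's actual parameters or processing.

SPEED_OF_LIGHT_M_S = 299_792_458  # radio pulses travel at the speed of light

def range_from_echo(round_trip_seconds: float) -> float:
    """Return the one-way distance (meters) to a reflecting object,
    given the time for a pulse to travel out and bounce back."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# An echo arriving ~100 nanoseconds after transmission implies an object
# roughly 15 meters away -- about the 50-foot open-air range quoted above.
print(round(range_from_echo(100e-9), 1))  # ~15.0 meters
```

The tight timing is the hard part in practice: at these scales, each additional nanosecond of delay corresponds to only about 15 centimeters of range.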

One scenario the team gave to illustrate Lumineye’s advantages is hostage rescue. In this type of situation, it’s crucial for first responders to know how many people are in a room and how far away they are from one another. That’s where multiple devices and triangulation with something like Lumineye could change a responding team’s tactical rescue approach.
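Since each unit reports only range, not direction, combining readings from two devices placed apart narrows a subject’s position to the intersection of two circles. A minimal 2D sketch of that triangulation idea, with hypothetical sensor positions and ideal, noise-free ranges:

```python
import math

def intersect_circles(p1, r1, p2, r2):
    """Given two range-only sensors at p1/p2 reporting distances r1/r2,
    return the two candidate 2D positions of the subject
    (the intersection points of the two range circles)."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.dist(p1, p2)  # distance between the sensors
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # readings inconsistent: circles do not intersect
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # p1 to chord midpoint
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half the chord length
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    dx, dy = (y2 - y1) / d, -(x2 - x1) / d  # unit vector along the chord
    return [(mx + h * dx, my + h * dy), (mx - h * dx, my - h * dy)]

# Two sensors 6 m apart each report the subject 5 m away:
# candidates at (3, -4) and (3, 4), symmetric about the baseline.
print(intersect_circles((0, 0), 5, (6, 0), 5))
```

In a real deployment, a third sensor (or simply knowing which side of the baseline the room is on) resolves the two-candidate ambiguity, and noisy measurements would call for a least-squares fit rather than an exact intersection.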

Machines that currently exist to make these kinds of detections are heavy and cumbersome. The team behind Lumineye was inspired to build a more portable option that won’t weigh teams down during longer emergency responses, which can sometimes last 12 hours or overnight. The prototype combines the detection hardware with an ordinary smartphone. It’s about 10 x 5 inches and weighs 1.5 pounds.

Lumineye wants to expand the device’s functionality to make it more of a ubiquitous tool. The team of four plans to continue manufacturing the device and selling it directly to customers.

 

Lumineye’s device can detect humans through walls using radio frequencies

Lumineye has just started its pilot programs, and recently spent a Saturday at a FEMA event testing the device’s ability to detect people buried in rubble piles. The company was born out of the Boise, Idaho cohort of Stanford’s Hacking4Defense program, a course meant to connect Silicon Valley innovations with the U.S. Department of Defense and intelligence community. The Idaho-based startup is graduating from Y Combinator’s Summer 2019 class.

Megan Lacy, Corbin Hennen and Rob Kleffner

16 Aug 2019

Slack co-founder Cal Henderson and Spark Capital’s Megan Quinn are coming to Disrupt SF

If there was one company at the top of everyone’s mind this year, it was Slack.

The now-ubiquitous workplace messaging tool began trading on the New York Stock Exchange in June after taking an unusual route to the public markets known as a direct listing. Slack bypassed the typical IPO process in favor of putting its current stock on to the NYSE without doing an additional raise or bringing on underwriter banking partners.

Slack co-founder and chief technology officer Cal Henderson and Slack investor and Spark Capital general partner Megan Quinn will join us onstage at TechCrunch Disrupt SF to give a behind-the-scenes look at Slack’s banner year, the company’s origin story and what convinced Quinn to participate in the business’s funding round years ago.

Early in his career, Henderson was the technical director of Special Web Projects at Emap, a UK media company. Later, he became the head of engineering for Flickr, the photo-sharing tool co-founded by Slack chief Stewart Butterfield. In April 2009, he was reported to be starting a new stealth social gaming company with Butterfield, a project that would ultimately become Slack.

Quinn, for her part, added Slack to the Spark Capital portfolio in 2015, participating in the company’s $160 million Series E at a valuation of $2.8 billion. No small startup at the time, Slack already had 750,000 daily users and backing from Accel, Andreessen Horowitz, Social Capital, GV and Kleiner Perkins.

Quinn is a seasoned investor, known for striking deals with Coinbase, Glossier, Rover and Wealthfront, among others. She first entered the venture capital scene in 2012 as an investment partner at Kleiner Perkins, where she invested in early to mid-stage consumer tech startups. Quinn joined Spark Capital in 2015 to make growth-stage investments in companies across the board.

Before trying her hand at VC, she spent seven years in product management and strategic partnership development at Google and one year as the head of product at payments company Square.

Disrupt SF runs October 2 to October 4 at the Moscone Center in San Francisco. Tickets are available here.

16 Aug 2019

Google discloses its acquisition of mobile learning app Socratic as it relaunches on iOS

Google publicly disclosed its acquisition of homework helper app Socratic in an announcement this week, detailing the added support for the company’s A.I. technology and its relaunch on iOS. The acquisition apparently flew under the radar — Google says it bought the app last year.

According to one founder’s LinkedIn update, that was in March 2018. Google hasn’t responded to requests for comment for more details about the deal, but we’ll update if that changes.

Socratic was founded in 2013 by Chris Pedregal and Shreyans Bhansali with the goal of creating a community that made learning accessible to all students.

Initially, the app offered a Quora-like Q&A platform where students could ask questions which were answered by experts. By the time Socratic raised $6 million in Series A funding back in 2015, its community had grown to around 500,000 students. The company later evolved to focus less on connecting users and more on utility.

In 2015, Socratic launched a mobile app with a feature that lets students take a photo of a homework question to get instant explanations. This is similar to many other apps in the space, like Photomath, Mathway, DoYourMath and others.

However, Socratic isn’t just a math helper — it can also tackle subjects like science, literature, social studies, and more.

In February 2018, Socratic announced it would remove the app’s social features. That June, the company said it was closing its Q&A website to user contributions. This decision was met with some backlash from disappointed users.

Socratic explained the app and website were different products, and it was strategically choosing to focus on the former.

“We, as anyone, are bound by the constraints of reality—you just can’t do everything—which means making decisions and tradeoffs where necessary. This one is particularly painful,” wrote Community Lead Becca McArthur at the time.

That strategy, apparently, was to make Socratic a Google A.I.-powered product. According to Google’s blog post penned by Bhansali — now the Engineering Manager at Socratic — the updated iOS app uses A.I. technology to help users.

The new version of the iOS app still allows you to snap a photo to get answers, or you can speak your question.

For example, if a student takes a photo of a classroom handout or asks a question like “what’s the difference between distance and displacement?”, Socratic will return a top match, followed by explainers, a Q&A section, and even related YouTube videos and web links. It’s almost like a custom search engine just for your homework questions.

Google also says it has built and trained algorithms that can analyze the student’s question and identify the underlying concepts in order to point users to these resources. For students who need even more help, the app can break down the concepts into smaller, easy-to-understand lessons.

In addition, the app includes subject guides on over 1,000 higher education and high school topics, developed with help from educators. The study guides can help students prepare for tests or just better learn a particular concept.

“In building educational resources for teachers and students, we’ve spent a lot of time talking to them about challenges they face and how we can help,” writes Bhansali. “We’ve heard that students often get ‘stuck’ while studying. When they have questions in the classroom, a teacher can quickly clarify—but it’s frustrating for students who spend hours trying to find answers while studying on their own,” he says.

This is where Socratic will help.

That said, the acquisition could help Google in other ways, too. Beyond Socratic’s primary focus as a homework helper, it could aid Google Assistant technology across platforms, as the virtual assistant could learn to answer more complex questions that Google’s Knowledge Graph doesn’t already cover.

The relaunched, A.I.-powered version of Socratic by Google arrived Thursday on iOS, where the app update text also discloses that the app is now owned by Google.

The Android version of the app will launch this fall.

 

16 Aug 2019

Northrop Grumman to build its OmegA rocket at NASA’s VAB as first commercial tenant

NASA is celebrating alongside Northrop Grumman at Kennedy Space Center in Florida, as the latter becomes the first commercial partner to make use of the Vehicle Assembly Building on-site at the center. The VAB, as it’s more commonly known, is a cavernous building used to build and test rockets ahead of rolling them out to nearby launch pads; it was originally constructed by NASA to support the Apollo program.

Northrop Grumman will be using the VAB to build and prep its OmegA launch vehicle, a new rocket the company is building to transport intermediate and heavy payloads to orbit. It’s a fully expendable rocket, which Northrop is positioning as a lower-risk alternative to the reusable models flown by competitors (cough SpaceX cough), and it’s also billed as an ‘affordable’ option for those seeking launch services. OmegA is designed to help Northrop Grumman compete for future national security launch contracts, as well as support commercial customer missions.

NASA will also continue to use the VAB for the assembly of its own Space Launch System (SLS) rocket, which will be supporting missions in the Artemis program and transporting the Lockheed Martin-built Orion crew craft to space, and eventually to the Moon.

Kennedy also already plays host to rocket assembly and launch facilities for both SpaceX and Blue Origin, making it a hot spot for public-private space business activity.