Year: 2019

26 Aug 2019

Nvidia and VMware team up to make GPU virtualization easier

Nvidia today announced that it has been working with VMware to bring its virtual GPU technology (vGPU) to VMware’s vSphere and VMware Cloud on AWS. The company’s core vGPU technology isn’t new, but it now supports server virtualization to enable enterprises to run their hardware-accelerated AI and data science workloads in environments like VMware’s vSphere, using its new vComputeServer technology.

Traditionally (as far as that’s a thing in AI training), GPU-accelerated workloads have tended to run on bare-metal servers, which are typically managed separately from the rest of a company’s servers.

“With vComputeServer, IT admins can better streamline management of GPU accelerated virtualized servers while retaining existing workflows and lowering overall operational costs,” Nvidia explains in today’s announcement. This also means that businesses will reap the cost benefits of GPU sharing and aggregation, thanks to the improved utilization this technology promises.

vComputeServer works with VMware vSphere, vCenter and vMotion, as well as VMware Cloud. Indeed, the two companies are using the same vComputeServer technology to also bring accelerated GPU services to VMware Cloud on AWS. This allows enterprises to move their containerized applications from their own data centers to the cloud as needed — and then hook into AWS’s other cloud-based technologies.


“From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Nvidia founder and CEO Jensen Huang. “Together with VMware, we’re designing the most advanced and highest performing GPU-accelerated hybrid cloud infrastructure to foster innovation across the enterprise.”

26 Aug 2019

India’s FreshToHome raises $20M to grow its fish, meat, vegetable, and milk e-commerce platform

FreshToHome, a Bangalore-based e-commerce startup that sells fresh fish, chicken, and other kinds of meat, has raised $20 million in a new financing round as it looks to expand its footprint in the nation.

The Series B round for the startup was led by Iron Pillar, with Joe Hirao, the founder of Japan’s ZIGExn, also participating in the round. The startup, which closed its $11 million Series A financing round three months ago, has raised $33 million to date.

FreshToHome sells “100 percent” pure and fresh — it adds no preservatives or other chemicals — vegetables and meat in Bangalore, Mumbai, and Pune — the latter two of which it recently entered. Unlike most other marketplaces, FreshToHome has built its own supply chain network, giving it better control over delivery.

As a result, FreshToHome is able to deliver perishables on the same day, and in as little as two hours, Shan Kadavil, CEO of FreshToHome, told TechCrunch in an interview.

The startup has amassed 650,000 customers and recently started to sell milk in Bangalore, another market segment that remains largely unstructured in the nation.

And that growth has helped the startup attract some attention. Several major players in the nation, including Amazon India and Walmart, which have recently expanded into the perishables category, have held talks with FreshToHome to acquire a stake in the startup, a person familiar with the matter told TechCrunch.

India’s cold-chain market is estimated to grow to $37 billion over the next five years.

More to follow…

26 Aug 2019

India’s BharatPe raises $50M to help merchants accept digital payments and secure working capital

BharatPe, a New Delhi-based firm that each month enables hundreds of thousands of merchants to start accepting digital payments for the first time, and that also gives them access to working capital, has raised $50 million as it looks to scale its business in the nation.

The Series B round for the one-year-old startup was led by San Francisco-headquartered VC firm Ribbit Capital and London-based Steadview Capital, both of which have previously invested in a number of financial services startups in India.

Existing investors Sequoia Capital, Beenext Capital, and Insight Partners also participated in the round, pushing BharatPe’s all-time raise to $65 million. The new round valued the startup at $225 million, Ashneer Grover, cofounder and CEO of BharatPe, told TechCrunch in an interview.

BharatPe operates an eponymous service to help offline merchants accept digital payments. Even as India has emerged as the second largest internet market with over 500 million users, much of the country remains offline. Among those outside the reach of the internet are merchants running small businesses such as roadside tea stalls.


BharatPe team with actor Salman Khan, who is the firm’s brand ambassador.

To make these merchants comfortable in accepting digital payments, BharatPe relies on QR codes built as part of government-backed UPI payments infrastructure. “We get them to put up a QR code in their shops, and any customer that uses a UPI-powered payments app — which is now supported by nearly every payments app in India — can pay these shop owners digitally,” said Grover.

Through BharatPe, these merchants also get access to a simplified dashboard on their phones to track the customers who owe them money and get periodic reminders.

BharatPe has amassed more than 1.5 million merchants on its platform. It processes over 21 million transactions a month worth more than $83 million, Grover said.

BharatPe also allows merchants to secure short-term loans. New merchants can secure about $500 for a period of three months from BharatPe. As merchants spend more time on BharatPe, the firm expands the amount to about $2,000.

The lending business is crucial to BharatPe. Payments apps make little to no money from transactions on their platforms. Those processing UPI payments cannot even charge a small commission to merchants. “There is no money to be made in doing payments in India,” Grover said. But payment services can charge interest on loans.

Access to working capital is a major challenge in developing markets such as India. According to a World Bank report, more than 2 billion people globally do not have access to working capital.

Grover said BharatPe aims to use the funds to add about 3.5 million merchants in the next 12 months. The firm has more than 2,000 salespeople who are adding 400,000 new merchants to BharatPe each month, he said.

The rest of the money will go into financing the loans on the platform and building new solutions. Later today, BharatPe will launch a new service that connects suppliers and merchants through BharatPe so that their accounts stay in sync.

26 Aug 2019

Megvii, the Chinese startup unicorn known for facial recognition tech, files to go public in Hong Kong

Megvii Technology, the Beijing-based artificial intelligence startup known in particular for its facial recognition brand Face++, has filed for a public listing on the Hong Kong stock exchange.

Its prospectus did not disclose share pricing or when the IPO will take place, but Reuters reports that the company plans to raise between $500 million and $1 billion and list in the fourth quarter of this year. Megvii’s investors include Alibaba, Ant Financial and the Bank of China. Its last funding round was a Series D of $750 million announced in May that reportedly brought its valuation to more than $4 billion.

Founded by three Tsinghua University graduates in 2011, Megvii is among China’s leading AI startups, with its peers (and rivals) including SenseTime and Yitu. Its clients include Alibaba, Ant Financial, Lenovo, China Mobile and Chinese government entities.

The company’s decision to list in Hong Kong comes against the backdrop of an economic recession and political unrest, including pro-democracy demonstrations, factors that have contributed to a slump in the value of the benchmark Hang Seng index. Last month, Alibaba reportedly decided to postpone its Hong Kong listing until the political and economic environment becomes more favorable.

Megvii’s prospectus discloses both rapid growth in revenue and widening losses, which the company attributes to changes in the fair value of its preferred shares and investment in research and development. Its revenue grew from 67.8 million RMB in 2016 to 1.42 billion RMB in 2018, representing a compound annual growth rate of about 359%. In the first six months of 2019, it made 948.9 million RMB. Between 2016 and 2018, however, its losses increased from 342.8 million RMB to 3.35 billion RMB, and in the first half of this year, Megvii has already lost 5.2 billion RMB.
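The reported growth rate checks out with simple compounding arithmetic. A back-of-the-envelope sketch, using only the prospectus figures quoted above:

```python
# Sanity check of Megvii's reported revenue CAGR.
# Revenue grew from 67.8 million RMB (2016) to 1.42 billion RMB (2018),
# i.e. over two annual compounding periods.
rev_2016 = 67.8e6
rev_2018 = 1.42e9
years = 2

cagr = (rev_2018 / rev_2016) ** (1 / years) - 1
print(f"CAGR ≈ {cagr:.0%}")  # close to the ~359% the prospectus reports
```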

Investment risks listed by Megvii include high R&D costs, the U.S.-China trade war and negative publicity over facial recognition technology. Earlier this year, Human Rights Watch published a report that linked Face++ to a mobile app used by Chinese police and officials for mass surveillance of Uighurs in Xinjiang, but it later added a correction that said Megvii’s technology had not been used in the app. Megvii’s prospectus alluded to the report, saying that in spite of the correction, the report “still caused significant damages to our reputation which are difficult to completely mitigate.”

The company also said that despite internal measures to prevent misuse of Megvii’s tech, it cannot assure investors that those measures “will always be effective,” and that AI technology’s risks and challenges include “misuse by third parties for inappropriate purposes, for purposes breaching public confidence or even violate applicable laws and regulations in China and other jurisdictions, bias applications or mass surveillance, that could affect user perception, public opinions and their adoption.”

From a macroeconomic perspective, Megvii’s investment risks include the restrictions and tariffs placed on Chinese exports to the U.S. as part of the ongoing trade war. It also cited reports that Megvii is among the Chinese tech companies the U.S. government may add to trade blacklists. “Although we are not aware of, nor have we received any notification, that we have been added as a target of any such restrictions as of the date this Document, the existence of such media reports itself has already damaged our reputation and diverted our management’s attention,” the prospectus said. “Whether or not we will be included as a target for economic and trade restrictions is beyond our control.”

25 Aug 2019

Web host Hostinger says data breach may affect 14 million customers

Hostinger said it has reset user passwords as a “precautionary measure” after it detected unauthorized access to a database containing information on millions of its customers.

The breach is said to have happened on Thursday. The company said in a blog post it received an alert that one of its servers was improperly accessed. Using an access token found on the server, which can give access to systems without needing a username or a password, the hacker gained further access to the company’s systems, including an API database containing customer usernames, email addresses, and scrambled passwords.

Hostinger said the API database stored about 14 million customer records. The company has more than 29 million customers on its books.

“We have restricted the vulnerable system, and such access is no longer available,” said Daugirdas Jankus, Hostinger’s chief marketing officer.

“We are in contact with the respective authorities,” said Jankus.


An email from Hostinger explaining the data breach. (Image: supplied)

News of the breach broke overnight. According to the company’s status page, affected customers have already received an email to reset their passwords.

The company said that financial data wasn’t taken in the breach, nor were customer website files or data affected.

But one customer who was affected by the breach accused the company of being potentially “misleading” about the scope of the breach.

A chat log seen by TechCrunch shows a customer support representative telling the customer it was “correct” that customers’ financial data can be retrieved by the API but that the company does “not store any payment data.” Hostinger uses multiple payment processors, the representative told the customer, but did not name them.

“They say they do not store payment details locally, but they have an API that can pull this information from the payment processor and the attacker had access to it,” the customer told TechCrunch.

We’ve reached out to Hostinger for more, but a spokesperson did not immediately comment.


25 Aug 2019

Original Content podcast: Netflix’s ‘Red Sea Diving Resort’ awkwardly mixes fiction and reality

“The Red Sea Diving Resort,” a new film on Netflix, is based on the true story of Mossad agents who took over an abandoned holiday resort in Sudan to smuggle Jewish Ethiopian refugees out of the country.

As we explain in the latest episode of the Original Content podcast, the film feels like it’s made in the “Argo” mold, fashioning a political thriller out of too-crazy-for-fiction events. But it’s not as well-made as “Argo,” and it struggles with the same challenges — mixing serious and comedic tones, and balancing real-world politics with blockbuster thrills.

The balance feels particularly awkward with “Captain America” actor Chris Evans playing the Mossad agent leading the operation. He’s not bad in the role, but there’s not much substance or complexity to it, and his presence underlines the feeling that we’re watching a Hollywood fantasy.

The film also skimps on providing any broader political context. Maybe it deserves credit for not holding the audience’s hand, but as a result, all we know is who the good guys are and who the bad guys are. Meanwhile, none of the refugees — not even Kabede, who’s played by Michael K. Williams of “The Wire” — fully emerges as a three-dimensional character.

Before our review, we discuss the apparent end of Disney and Sony’s agreement making Spider-Man part of the Marvel Cinematic Universe, news that prompted outrage and petitions from unhappy fans.

You can listen in the player below, subscribe using Apple Podcasts or find us in your podcast player of choice. If you like the show, please let us know by leaving a review on Apple. You can also send us feedback directly. (Or suggest shows and movies for us to review!)

And if you’d like to skip ahead, here’s how the episode breaks down:

0:00 Intro
2:25 Spider-Man news
14:37 “Red Sea Diving Resort” review
37:02 “Red Sea Diving Resort” spoiler discussion

25 Aug 2019

The risks of amoral A.I.

Artificial intelligence is now being used to make decisions about lives, livelihoods, and interactions in the real world in ways that pose real risks to people.

We were all skeptics once. Not that long ago, conventional wisdom held that machine intelligence showed great promise, but it was always just a few years away. Today there is absolute faith that the future has arrived.

It’s not that surprising with cars that (sometimes and under certain conditions) drive themselves and software that beats humans at games like chess and Go. You can’t blame people for being impressed.

But board games, even complicated ones, are a far cry from the messiness and uncertainty of real life, and autonomous cars still aren’t actually sharing the road with us (at least not without some catastrophic failures).

AI is being used in a surprising number of applications, making judgments about job performance, hiring, loans, and criminal justice among many others. Most people are not aware of the potential risks in these judgments. They should be. There is a general feeling that technology is inherently neutral — even among many of those developing AI solutions. But AI developers make decisions and choose tradeoffs that affect outcomes. Developers are embedding ethical choices within the technology but without thinking about their decisions in those terms.

These tradeoffs are usually technical and subtle, and the downstream implications are not always obvious at the point the decisions are made.

The fatal Uber accident in Tempe, Arizona, is a blunt but illustrative example that makes it easy to see how this happens.

The autonomous vehicle system actually detected the pedestrian in time to stop, but the developers had tweaked the emergency braking system in favor of not braking too much, balancing a tradeoff between jerky driving and safety. The Uber developers opted for the more commercially viable choice. Eventually autonomous driving technology will improve to a point that allows for both safety and smooth driving, but will we put autonomous cars on the road before that happens? Profit interests are pushing hard to get them on the road immediately.

Physical risks pose an obvious danger, but there has been real harm from automated decision-making systems as well. AI does, in fact, have the potential to benefit the world. Ideally, we mitigate for the downsides in order to get the benefits with minimal harm.

A significant risk is that we advance the use of AI technology at the cost of reducing individual human rights. We’re already seeing that happen. One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm. 

Buyer Beware

Buyers of the technology are at a disadvantage when they know so much less about it than the sellers do. For the most part, decision makers are not equipped to evaluate intelligent systems. In economic terms, there is an information asymmetry that puts AI developers in a more powerful position than those who might use their technology. (Side note: the subjects of AI decisions generally have no power at all.) The nature of AI is that you simply trust (or don’t trust) the decisions it makes. You can’t ask the technology why it decided something, whether it considered other alternatives, or what would happen under hypothetical variations of the question you asked. Given the current trust in technology, vendors’ promises about a cheaper and faster way to get the job done can be very enticing.

So far, we as a society have not had a way to assess the value of algorithms against the costs they impose on society. There has been very little public discussion even when government entities decide to adopt new AI solutions. Worse than that, information about the data used for training the system plus its weighting schemes, model selection, and other choices vendors make while developing the software are deemed trade secrets and therefore not available for discussion.


The Yale Journal of Law and Technology published a paper by Robert Brauneis and Ellen P. Goodman where they describe their efforts to test the transparency around government adoption of data analytics tools for predictive algorithms. They filed forty-two open records requests to various public agencies about their use of decision-making support tools.

Their “specific goal was to assess whether open records processes would enable citizens to discover what policy judgments these algorithms embody and to evaluate their utility and fairness”. Nearly all of the agencies involved were either unwilling or unable to provide information that could lead to an understanding of how the algorithms worked to decide citizens’ fates. Government record-keeping was one of the biggest problems, but companies’ aggressive trade secret and confidentiality claims were also a significant factor.

Data-driven risk assessment tools can be useful, especially for identifying low-risk individuals who can benefit from reduced prison sentences. Reduced or waived sentences alleviate stresses on the prison system and benefit the individuals, their families, and their communities as well. Despite the possible upsides, if these tools interfere with Constitutional rights to due process, they are not worth the risk.

All of us have the right to question the accuracy and relevance of information used in judicial proceedings and in many other situations as well. Unfortunately for the citizens of Wisconsin, the argument that a company’s profit interest outweighs a defendant’s right to due process was affirmed by that state’s supreme court in 2016.

Fairness is in the Eye of the Beholder

Of course, human judgment is biased too. Indeed, professional cultures have had to evolve to address it. Judges, for example, strive to separate their prejudices from their judgments, and there are processes to challenge the fairness of judicial decisions.

In the United States, the 1968 Fair Housing Act was passed to ensure that real-estate professionals conduct their business without discriminating against clients. Technology companies do not have such a culture. Recent news has shown just the opposite. For individual AI developers, the focus is on getting the algorithms correct with high accuracy for whatever definition of accuracy they assume in their modeling.

I recently listened to a podcast in which the participants wondered whether talk about bias in AI wasn’t holding machines to a different standard than humans — seeming to suggest that machines were being put at a disadvantage in some imagined competition with humans.

As true technology believers, the host and guest eventually concluded that once AI researchers have solved the machine bias problem, we’ll have a new, even better standard for humans to live up to, and at that point the machines can teach humans how to avoid bias. The implication is that there is an objective answer out there, and while we humans have struggled to find it, the machines can show us the way. The truth is that in many cases there are contradictory notions about what it means to be fair.

A handful of research papers have come out in the past couple of years that tackle the question of fairness from a statistical and mathematical point-of-view. One of the papers, for example, formalizes some basic criteria to determine if a decision is fair.

In their formalization, in most situations, differing ideas about what it means to be fair are not just different but actually incompatible. A single objective solution that can be called fair simply doesn’t exist, making it impossible for statistically trained machines to answer these questions. Considered in this light, a conversation about machines giving human beings lessons in fairness sounds more like theater of the absurd than a thoughtful conversation about the issues involved.
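The incompatibility is easy to demonstrate on toy data. The sketch below is hypothetical and not drawn from any specific paper: when two groups have different base rates of qualification, a classifier that satisfies demographic parity (equal approval rates across groups) generally cannot also satisfy equal opportunity (equal true-positive rates).

```python
# Toy illustration: two fairness criteria that cannot both hold.
# group -> (true labels, classifier predictions); invented data.
groups = {
    "A": ([1] * 8 + [0] * 2, [1] * 5 + [0] * 5),  # base rate 0.8; 5/10 approved
    "B": ([1] * 2 + [0] * 8, [1] * 5 + [0] * 5),  # base rate 0.2; 5/10 approved
}

def positive_rate(preds):
    """Fraction of the group the classifier approves."""
    return sum(preds) / len(preds)

def true_positive_rate(labels, preds):
    """Fraction of truly qualified members the classifier approves."""
    approved = [p for y, p in zip(labels, preds) if y == 1]
    return sum(approved) / len(approved)

for name, (labels, preds) in groups.items():
    print(name, positive_rate(preds), true_positive_rate(labels, preds))
# Both groups are approved at the same rate (demographic parity holds),
# yet qualified members of group A are approved only 62.5% of the time
# versus 100% for group B, so equal opportunity is violated.
```

Equalizing the true-positive rates instead would force the approval rates apart, so any choice of which criterion to enforce is itself a value judgment.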


When there are questions of bias, a discussion is necessary. What it means to be fair in contexts like criminal sentencing, granting loans, and job and college opportunities has not been settled and, unfortunately, involves political elements. We’re being asked to join in an illusion that artificial intelligence can somehow de-politicize these issues. The fact is, the technology embodies a particular stance, but we don’t know what it is.

Technologists with their heads down focused on algorithms are determining important structural issues and making policy choices. This removes the collective conversation and cuts off input from other points of view. Sociologists, historians, political scientists, and above all stakeholders within the community would have a lot to contribute to the debate. Applying AI to these tricky problems paints on a veneer of science that purports to dole out apolitical solutions to difficult questions.

Who Will Watch the (AI) Watchers?

One major driver of the current trend to adopt AI solutions is that the negative externalities from the use of AI are not borne by the companies developing it. Typically, we address this situation with government regulation. Industrial pollution, for example, is restricted because it creates a future cost to society. We also use regulation to protect individuals in situations where they may come to harm.

Both of these potential negative consequences exist in our current uses of AI. For self-driving cars, there are already regulatory bodies involved, so we can expect a public dialog about when and in what ways AI-driven vehicles can be used. What about the other uses of AI? Currently, except for some action by New York City, there is exactly zero regulation around the use of AI. The most basic assurances of algorithmic accountability are not guaranteed for either users of technology or the subjects of automated decision making.


Unfortunately, we can’t leave it to companies to police themselves. Facebook’s slogan, “Move fast and break things” has been retired, but the mindset and the culture persist throughout Silicon Valley. An attitude of doing what you think is best and apologizing later continues to dominate.

This has apparently been effective when building systems to upsell consumers or connect riders with drivers. It becomes completely unacceptable when you make decisions affecting people’s lives. Even if well-intentioned, the researchers and developers writing the code don’t have the training or, at the risk of offending some wonderful colleagues, the inclination to think about these issues.

I’ve seen firsthand too many researchers who demonstrate a surprising nonchalance about the human impact. I recently attended an innovation conference just outside of Silicon Valley. One of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.

When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.” This was one of the worst examples I’ve seen, but many researchers don’t have these issues on their radar. I suppose that requiring computer scientists to double major in moral philosophy isn’t practical, but the lack of concern is striking.

Recently we learned that Amazon abandoned an in-house technology that they had been testing to select the best resumes from among their applicants. Amazon discovered that the system they created developed a preference for male candidates, in effect, penalizing women who applied. In this case, Amazon was sufficiently motivated to ensure their own technology was working as effectively as possible, but will other companies be as vigilant?

As a matter of fact, Reuters reports that other companies are blithely moving ahead with AI for hiring. A third-party vendor selling such technology actually has no incentive to test that it’s not biased unless customers demand it, and as I mentioned, decision makers are mostly not in a position to have that conversation. Again, human bias plays a part in hiring too. But companies can and should deal with that.

With machine learning, they can’t be sure what discriminatory features the system might learn. Absent market forces, unless companies are compelled to be transparent about the development and use of opaque technology in domains where fairness matters, it’s not going to happen.

Accountability and transparency are paramount to safely using AI in real-world applications. Regulations could require access to basic information about the technology. Since no solution is completely accurate, the regulation should allow adopters to understand the effects of errors. Are errors relatively minor or major? Uber’s use of AI killed a pedestrian. How bad is the worst-case scenario in other applications? How are algorithms trained? What data was used for training and how was it assessed to determine its fitness for the intended purpose? Does it truly represent the people under consideration? Does it contain biases? Only by having access to this kind of information can stakeholders make informed decisions about appropriate risks and tradeoffs.

At this point, we might have to face the fact that our current uses of AI are getting ahead of its capabilities and that using it safely requires a lot more thought than it’s getting now.

25 Aug 2019

Crypto means cryptotheology

Cryptocurrencies are a religion as much as they are a technology. They almost have to be, given their adherents’ gargantuan ambition of fundamentally changing how the world works. This means they attract charlatans, lunatics, frauds, and false prophets, and furious battles are waged over doctrinal hairsplitting; but it also means they inspire intransigent beliefs which can, and do, unify many thousands of wildly different people across continents and time zones.

This occurred to me while I was rereading Gibbon’s Decline and Fall, as one does, and in particular its depictions of the early days of the Christian faith:

But whatever difference of opinion might subsist between the Orthodox [church], the Ebionites, and the Gnostics, concerning the divinity or the obligation of the Mosaic law, they were all equally animated by the same exclusive zeal; and by the same abhorrence for idolatry … the established religions of Paganism were seen by the primitive Christians in a much more odious and formidable light. It was the universal sentiment both of the church and of heretics, that the daemons were the authors, the patrons, and the objects of idolatry.

For Orthodox church, Ebionites, and Gnostics, you can read, perhaps, “Bitcoin maximalists,” “blockchain not Bitcoin,” and “Ethereum maximalists.” They disagree bitterly, but one view they all share is a disdain verging on, and frequently exceeding, contempt for fiat currencies, untokenized assets, and most other aspects of money and finance as they are currently constructed. Instead they share a deep belief in the superiority, and inevitable supremacy, of a very different world.

The superstitious observances of public or private rites were carelessly practised, from education and habit, by the followers of the established religion. But as often as they occurred, they afforded the Christians an opportunity of declaring and confirming their zealous opposition. By these frequent protestations their attachment to the faith was continually fortified; and in proportion to the increase of zeal, they combated with the more ardor and success in the holy war, which they had undertaken against the empire of the demons.

I think few will disagree that, similarly, many cryptocurrency devotees seek out and seize every “opportunity of declaring and confirming their zealous opposition” to government money, central banks, rival maximalists, and other features of the monetary, financial, and/or centralized status quo.

The careless Polytheist, assailed by new and unexpected terrors, against which neither his priests nor his philosophers could afford him any certain protection, was very frequently terrified and subdued by the menace of eternal tortures. His fears might assist the progress of his faith and reason; and if he could once persuade himself to suspect that the Christian religion might possibly be true, it became an easy task to convince him that it was the safest and most prudent party that he could possibly embrace.

Similarly I don’t think it’s controversial to note that prophecies of the hyperinflation and collapse of national currencies, the downfall of central banks and fractional reserve banking in general, etc., are not unheard of among some of the … edgier … cryptocurrency people. One might even refer to the notion of “preaching the gospel” of deflationary, censorship-resistant cryptocurrency, sometimes in the hopes of scaring everyone who hears this doomsaying into buying some Bitcoin as a hedge.

Of course the religious parallels do not end with Gibbon. Cryptocurrencies were given to us not by a known, living, breathing, flawed human being, but by a pseudonymous verging-on-mythical quasi-demigod. (Cf., e.g., “Satoshi’s Vision.”) Mythically speaking, that’s easily analogized to Prometheus granting humanity fire, or Moses bringing the stone tablets down from Mount Sinai. They have real and false prophets. There’s even a “Bitcoin Jesus.” And all promise a better world tomorrow, while demanding sacrifices and inconveniences today.

My tongue is obviously in cheek here — but I’m not entirely unserious. Of course all money is ultimately backed by faith (cf “full faith and credit.”) But this is I think unquestionably more true of cryptocurrencies, especially because, a decade on from their creation, they have failed — so far! — to transform the world to a degree anything like their proclaimed potential.

Bitcoin itself is apparently going from strength to strength, as can be seen in its increasing dominance of total cryptocurrency market capitalization, but it’s still beyond tiny compared to the rest of the financial world. Its total trading volume as I write this is roughly $15 billion per day, which admittedly sounds like a lot, but compared to the $5.1 trillion a day for the forex market as a whole, it’s roughly three-tenths of one percent.

More importantly, Bitcoin continues to technically iterate (although I’ve grown skeptical about Lightning, which it seems to me will always suffer from all the end-user inconveniences of prepaid credit cards, with few balancing advantages) and has hovered near or above $10,000 in value for months now. But the uncertainties and investigations regarding Tether remain a threatening cloud on its horizon.

As for other cryptocurrencies, though — well, these are complex times.

Ethereum, the best-known and perhaps most interesting, has gone from a wave of DAO excitement shortly after its launch, which faltered, to a wave of ICO madness and “fat protocol” DApps (decentralized applications), which also faltered, to the latest wave and watchword, “DeFi” aka decentralized finance. This essentially aims to reinvent all of Wall Street and the City of London on the blockchain(s), in the long term.

Meanwhile, the technical underpinnings that would allow Ethereum to scale to Wall Street size, known as “Ethereum 2.0,” remain more notional than real. I’m a big fan of Ethereum (my own pet crypto project is built on it) and I don’t think DeFi is doomed to failure … but under the circumstances I can understand skepticism creeping in among those who are not true believers.

There are plenty of other technically interesting cryptocurrency initiatives: from privacy coins such as ZCash, Monero, and Grin, to the use of Tezos by Brazil’s fifth largest bank for security tokens (again, DeFi), to the growth and stabilization of Cosmos’s “internet of blockchains,” to Blockstack’s total-app-installs graph beginning to look a little more exponential than linear, albeit with still-tiny y-axis numbers.

However, I think it’s also fair to say that now that cryptocurrencies are no longer new, unknown, and fascinating, interest among both individuals and enterprises who are not true believers has waned considerably. The cultural whiplash one experiences when transitioning from a conference full of people convinced they are building a new technology that will transform the fundamental order of the world, to outsiders (even technical outsiders) remarking “oh, is that still a thing?” is increasingly sharp.

That was probably true of the Christians after they ceased to be new and interesting, though, and in the end the Christians conquered the most powerful empire in the world from within. I am definitely not prophesying the same outcome here. I continue to think cryptocurrencies will remain a financial alternative, a very significant and important one, but one used only by a few percent of people.

But I am saying that seeming increasingly distant from the external consensus reality, being driven by intransigent and sometimes bewildering faith as much as rational analysis, and ongoing associations with a cloud of crazy scandal and hangers-on snake-oil salespeople — all of which would be catastrophic signs for, say, a traditional new startup — can actually be indicators of the strength, not weakness, of a strange new religion. Something to bear in mind as we move into the second decade of cryptocurrencies.

25 Aug 2019

Week in Review: Google rips out its sweet tooth

Hey. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.

Last week, I talked about Snap’s bizarre decision to keep pursuing hardware without really changing its overarching strategy.


The big story

Google isn’t so sweet these days.

The company’s beloved naming scheme of alphabetizing sugary things dies with Android Pie. The company announced this week that it’s dumping the dessert scheme for a much more boring option. The new Android will be Android 10.

Google has been one of those companies that has always liked to keep its quirkiness at the forefront of its brand. Multi-colored logos and bikes and hats with spinners and Nooglers and nap pods might have been the fringe elements of a Google employee’s first week on the job, but that’s what the company’s branding still evoked for a lot of people. The company’s more whimsical elements have realistically always been removed from the real world of its business interests, but at this point the company seems only to be chipping away at the quirkiness of its brand; Google is just something different now.

Rebrands always grab attention, and companies always make broad, sweeping statements about the deep meaning of what the new logo or font or name means for the mission of the product at hand. With Android 10, Google says that its chief concern was promoting the universality of the operating system’s branding.

[W]e’ve heard feedback over the years that the names weren’t always understood by everyone in the global community. For example, L and R are not distinguishable when spoken in some languages.

So when some people heard us say Android Lollipop out loud, it wasn’t intuitively clear that it referred to the version after KitKat. It’s even harder for new Android users, who are unfamiliar with the naming convention, to understand if their phone is running the latest version. We also know that pies are not a dessert in some places, and that marshmallows, while delicious, are not a popular treat in many parts of the world.

There’s certainly room to question whether this decision has more to do with the fact that there aren’t too many desserts starting with the letter Q that immediately come to mind, or that Google marketing has decided to sanitize the Android brand with a corporate wash.

Send me feedback on Twitter @lucasmtny or email lucas@techcrunch.com

On to the rest of the week’s news.


Trends of the week

Here are a few big news items from big companies, with green links to all the sweet, sweet added context:

  • Apple’s credit card goes wide
    The Apple Card might be the prettiest credit card in the wild, but as the iPhone-aligned credit card starts shipping to customers, we’ll find out soon whether its extra features are enough to take down more popular millennial cards. Read more about it here.
  • Overstock’s CEO resigns amid bizarre “deep state” revelations 
    Libertarian tech CEOs are often a special kind of eccentric, but Overstock’s Patrick Byrne set a new bar for strange with his revelation that he had gotten sucked into a Trump-Russia scandal under the guise of helping unearth Hillary Clinton’s secrets. I don’t really understand it, and it seems he understood even less, but it cost him his job. Read more here.
  • Now, even the scooters are autonomous
    Segway seems to believe that it’s revolutionized the world of transportation a few times now, but its latest product is just a bit over-teched. The Segway Kickscooter T60 adds autonomous driving capabilities to the city electric scooter, but it doesn’t use them quite the way you might think. Read more here.


GAFA Gaffes

How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:

  1. States looking to take on tech giants themselves:
    [States to launch antitrust investigation into big tech companies, reports say]
  2. Facebook keeps learning more about how much it knew about CA:
    [Facebook really doesn’t want you to read these emails]
  3. Not really a gaffe, but just embarrassing for Apple Card:
    [Apple warns against storing Apple Card near leather or denim]

Extra Crunch

Our premium subscription service had another week of interesting deep dives. My colleagues and I made our way to Y Combinator Demo Days this week, where we screened the 160+ startups pitching and picked some favorites from both days.

The best 11 startups from YC Demo Days (Day 1)

“Eighty-four startups presented (read the full run-through of every company plus some early analysis here) and after chatting with investors, batch founders and of course, debating amongst ourselves, we’ve nailed down the 11 most promising startups to present during Day 1…”

The top 12 startups from YC Demo Days (Day 2)

“After two days of founders tirelessly pitching, we’ve reached the end of YC’s Summer 2019 Demo Days. TechCrunch witnessed more than 160 on-the-record startup pitches coming out of Y Combinator, spanning healthcare, B2B services, augmented reality and life-extending. Here are our favorites from Day 2…”

Here are some of our other top reads this week for premium subscribers. This week, we published some analysis on the latest YC class and also dug deep into the perks new employees get at some top companies.

Sign up for more newsletters in your inbox (including this one) here.

24 Aug 2019

Uber tries to reassure customers that it takes safety seriously, following NYTimes book excerpt

It’s hard at times not to feel sorry for Uber CEO Dara Khosrowshahi, given all that he inherited when he became the ride-share giant’s top boss back in August 2017. Among his many to-do items: take public a money-losing company whose private-market valuation had already soared past what many thought it was worth, clean up the organization’s win-at-all-costs image, and win over employees who clearly remained loyal to Uber cofounder Travis Kalanick, the inimitable figure whom Khosrowshahi was hired to replace.

Things are undoubtedly about to get worse, given the upcoming publication of a tell-all book about Uber authored by New York Times reporter Mike Isaac, which comes out in less than two weeks. In just one excerpt published yesterday by the newspaper, Isaac outlines how Uber misled customers into paying $1 more per ride by telling them Uber would use the proceeds to fund an “industry-leading background check process, regular motor vehicle checks, driver safety education, development of safety features in the app, and insurance.”

The campaign was hugely successful, according to Isaac, who reports that it brought in nearly half a billion dollars for Uber. Alas, according to employees who worked on the project, the fee was devised primarily to add $1 of pure margin to each trip.

Om Malik, a former tech journalist turned venture capitalist, published a tongue-in-cheek tweet yesterday after reading the excerpt, writing, “Apology from @dkhos coming any minute — we are different now.”

Malik was close. Instead of an apology, Uber today sent riders an email titled, somewhat ominously, “Your phone number stays hidden in the app.” The friendly reminder continues on to tell customers that their “phone number stays hidden when you call or text your driver through the app,” that “[p]ickup and dropoff locations are not visible in a driver’s trip history,” and that “for additional privacy, if you don’t want to share your exact address, request a ride to or from the nearest cross streets instead.”

The email was clearly meant to reassure riders, some of whom might be absorbing negative press about Uber and wondering if it cares about them at all. But not everyone follows Uber as closely as industry watchers in Silicon Valley, and either way, what the email mostly accomplishes is to remind customers that riding in an Uber involves a bit of risk.

Stressing that the company is “committed to safety” is the debating equivalent of a so-called negative pregnant, wherein a denial implies its affirmative opposite. It’s Uber shooting itself in the foot.

Uber

It would have been more on point — too much so, perhaps — for Uber to email riders that when it talks about safety, it means business (and not the kind where it swindles its own customers).

Either way, it underscores the tricky terrain Uber is left to navigate right now. Though campaigns like Uber’s so-called “safe rides fee” were orchestrated under the leadership of Kalanick — who did whatever it took to scale the company — it’s Khosrowshahi’s problem now.

So is the fact that the company’s shares have been sinking since its IPO in early May; that Uber’s cost-cutting measures will be scrutinized at every turn (outsiders especially relished the company’s decision to save on employees’ work anniversaries by cutting out helium balloons in favor of stickers); and that Uber appears to be losing the battle, city by city, against labor activists and its own drivers who want to push up the minimum wage paid to drivers.

And those are just three of many daunting challenges that Khosrowshahi has been tasked with figuring out (think food delivery, self-driving technologies, foreign and domestic opponents). No doubt Isaac’s book will highlight plenty of others.

How Uber handles the inevitable wave of bad publicity that comes with it remains to be seen. We don’t expect Khosrowshahi to come out swinging; that’s not his style. But we also hope the company doesn’t take to emailing riders directly. It’s great if Uber is taking customer safety more seriously than it might have under Kalanick’s leadership, but reaching out to tell riders how to remain safe from their Uber drivers isn’t the way to do it, especially without acknowledging in any way why it’s suddenly so eager to have the conversation.