10 Aug 2019

Tesla Model 3 owner implants RFID chip to turn her arm into a key

Forget the keycard or phone app: one software engineer is trying out a new way to unlock and start her Tesla Model 3.

Amie DD, who has a background in game simulation and programming, recently released a video showing how she “biohacked” her body. The software engineer removed the RFID chip from the Tesla Model 3 valet card using acetone, then placed it into a biopolymer, which was injected through a hollow needle into her left arm. A professional who specializes in body modifications performed the injection.

You can watch the process below, although folks who don’t like blood should consider skipping it. Amie DD also has a page on Hackaday.io that explains the project and the process.

The video is missing one crucial detail: it doesn’t show whether the method works. TechCrunch will update this post once a new video delivering the news is released.

Amie is not new to biohacking. Her original idea was to use the RFID chip already implanted in her hand to start the Model 3. That method, which would have involved taking the Java applet from the valet card and writing it onto her own chip, didn’t work because of Tesla’s security. So, Amie DD opted for another implant.
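
Tesla hasn’t published its key-card protocol, but the failure mode described above is typical of modern RFID authentication: the card proves possession of a secret key via challenge-response, so copying the applet or the card’s ID without the key accomplishes nothing. A toy sketch of that logic (all names and the HMAC scheme are illustrative, not Tesla’s actual design):

```python
import hashlib
import hmac
import os

# The "car" sends a random challenge and expects a MAC computed with a
# secret key provisioned inside the genuine chip. The key never leaves it.
SECRET_KEY = os.urandom(16)  # installed in the genuine chip at the factory

def chip_respond(key: bytes, challenge: bytes) -> bytes:
    """What the chip computes internally when challenged."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def car_authenticates(chip_key: bytes) -> bool:
    """The car's side: issue a challenge, verify the chip's response."""
    challenge = os.urandom(8)
    response = chip_respond(chip_key, challenge)
    expected = hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

genuine = car_authenticates(SECRET_KEY)     # True: chip holds the real key
cloned = car_authenticates(os.urandom(16))  # False: a copy without the key fails
```

This is why extracting and re-implanting the original chip works while copying its software onto a second chip does not: the secret stays locked inside the original hardware.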

Amie DD explains why and how she did this in another, longer video posted below. She also talks a bit about her original implant in her left hand, which she says is used for “access control.” She uses it to unlock the door of her home, for instance.


10 Aug 2019

Y Combinator-backed Trella brings transparency to Egypt’s trucking and shipping industry

Y Combinator has become one of the key ways that startups from emerging markets get the attention of American investors. And arguably no clutch of companies has benefited more from Y Combinator’s attention than emerging-market startups tackling the logistics market.

On the heels of the accelerator’s success with Flexport, which is now valued at over $1 billion, and its investment in Rappi, the billion-dollar Latin American on-demand delivery company, several startups from Northern and Southern Africa, Latin America, and Southeast Asia have gone through the program to get in front of Silicon Valley’s venture capital firms. These are companies like Kobo360, NowPorts, and, most recently, Trella.

The Egyptian company founded by Omar Hagrass, Mohammed el Garem, and Pierre Saad already has 20 shippers using its service and is monitoring and managing the shipment of 1,500 loads per month.

“The best way we would like to think of ourselves is that we would like to bring more transparency to the industry,” says Hagrass.

Like other logistics management services, Trella is trying to consolidate a fragmented industry around its app, which increases efficiency by giving carriers and shippers better price transparency and a way to see how cargo is moving around the country.

If the model sounds similar to what Kobo360 and Lori Systems are trying to do in Nigeria and Kenya, respectively, it’s because Hagrass knows the founders of both companies.

Technology ecosystems in these emerging markets are increasingly connected. For instance, Hagrass worked with Kobo360 founder Obi Ozor at Uber before launching Trella. And through Trella’s existing investors (the company has raised $600,000 in financing from Algebra Ventures), Hagrass was introduced to Josh Sandler, the chief executive of Lori Systems.

The three executives often compare notes on their startups and the logistics industry in Northern and Southern Africa, Hagrass says.

While each company has unique challenges, they’re all trying to solve an incredibly difficult problem and one that has huge implications for the broader economies of the countries in which they operate.

For Hagrass, who participated in the Tahrir Square protests, launching Trella was a way to provide help directly to everyday Egyptians without having to worry about the government.

“It’s three times more expensive to transport goods in Egypt than in the U.S.,” says Hagrass. “Through this platform I can do something good for the country.”

10 Aug 2019

How tech is transforming the intelligence industry

At a conference on the future challenges of intelligence organizations held in 2018, former Director of National Intelligence Dan Coats argued that the transformation of the American intelligence community must be a revolution rather than an evolution. The community must be innovative and flexible, capable of rapidly adopting new technologies wherever they may arise.

Intelligence communities across the Western world are now at a crossroads: the growing proliferation of technologies, including artificial intelligence, big data, robotics, the Internet of Things, and blockchain, is changing the rules of the game. The proliferation of these technologies – most of which are civilian – could create data breaches and lead to backdoor threats for intelligence agencies. Furthermore, since they are affordable and ubiquitous, they could be used for malicious purposes.

The technological breakthroughs of recent years have led intelligence organizations to challenge the accepted truths that have historically shaped their endeavors. The hierarchical, compartmentalized, industrial structure of these organizations is now changing, revolving primarily around the integration of new technologies with traditional intelligence work and the redefinition of the role of the humans in the intelligence process.

Take, for example, Open-Source Intelligence (OSINT) – a concept created by the intelligence community to describe information that is unclassified and accessible to the general public. Traditionally, this kind of information was considered inferior to classified information, and as a result, investments in OSINT technologies were substantially lower than in other types of technologies and sources. This is now changing: agencies are realizing that OSINT is easy to acquire and often more beneficial than other, more challenging types of information.
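
Part of what makes OSINT cheap to exploit is that triaging already-public text needs only trivial tooling. A toy sketch of watch-list scoring over collected public items (the items and terms below are invented for illustration):

```python
import re
from collections import Counter

# Already-collected public text: news items, posts, press releases, etc.
public_items = [
    "Port authority announces new container terminal near the naval base",
    "Local festival draws record crowds",
    "Satellite imagery firm releases photos of the naval base expansion",
]
watchlist = {"naval", "satellite"}  # terms an analyst cares about

def score(text: str) -> int:
    """Count watch-listed term occurrences in one item."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return sum(words[term] for term in watchlist)

# Rank items so the analyst reads the most relevant ones first.
ranked = sorted(public_items, key=score, reverse=True)
```

Real OSINT pipelines add collection, deduplication, and entity extraction on top, but the economics are the same: the raw material is free and the processing is commodity software.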

Yet this understanding is trickling down slowly, as the use of OSINT by intelligence organizations still involves cumbersome processes, including slow and complex integration of unclassified and classified IT environments. It isn’t surprising, therefore, that intelligence executives – for example, the head of the State Department’s intelligence arm and the nominee to become director of the National Reconnaissance Office – have recently argued that one of the community’s grandest challenges is the quick and efficient integration of OSINT into its operations.

Indeed, technological innovations have always been central to the intelligence profession. But when it came to processing, analyzing, interpreting, and acting on intelligence, human ability – with all its limitations – has always been considered unquestionably superior. That the proliferation of data and data sources necessitates a better system of prioritization and analysis is not in question. But who should have supremacy: humans or machines?

A man crosses the Central Intelligence Agency (CIA) seal in the lobby of CIA Headquarters in Langley, Virginia, on August 14, 2008. (Photo: SAUL LOEB/AFP/Getty Images)

Big data comes for the spy business

The discourse is tempestuous. Intelligence veterans claim that there is no substitute for human judgment. They argue that artificial intelligence will never be capable of comprehending the full spectrum of considerations in strategic decision-making, and that it cannot evaluate abstract issues in the interpretation of human behavior. Machines can collect data and perhaps identify patterns, but they will never succeed in interpreting reality as do humans. Others also warn of the ethical implications of relying on machines for life-or-death situations, such as a decision to go to war.

In contrast, techno-optimists claim that human superiority, which defined intelligence activities over the last century, is already bowing to technological superiority. While humans are still significant, their role is no longer exclusive, and perhaps not even the most important in the process. How can the average intelligence officer cope with the ceaseless volumes of information that the modern world produces?

From 1995 to 2016, the amount of reading required of an average US intelligence researcher, covering a low-priority country, grew from 20,000 to 200,000 words per day. And that is just the beginning. According to forecasts, the volume of digital data that humanity will produce in 2025 will be ten times greater than is produced today. Some argue this volume can only be processed – and even analyzed – by computers.
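
Taken at face value, those figures imply a steady compounding of the reading load; a quick back-of-the-envelope check:

```python
# Implied annual growth of an analyst's reading load, per the figures above:
# 20,000 words/day in 1995 to 200,000 in 2016, i.e. a 10x rise over 21 years.
years = 2016 - 1995
growth = (200_000 / 20_000) ** (1 / years) - 1
print(f"{growth:.1%} per year")  # roughly 11.6%, compounded annually
```

If the forecast of another 10x in data volume holds, the same compounding continues, which is the core of the techno-optimists’ argument that triage must be automated.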

Of course, the most ardent advocates for integration of machines into intelligence work are not removing human involvement entirely; even the most skeptical do not doubt the need to integrate artificial intelligence into intelligence activities. The debate centers on the question of who will help whom: machines in aid of humans or humans in aid of machines.

Most insiders agree that the key to moving intelligence communities into the 21st century lies in breaking down inter- and intra-organizational walls, including between the services within the national security establishment; between the public sector, the private sector, and academia; and between intelligence services of different countries.

It isn’t surprising therefore that the push toward technological innovation is a part of the current intelligence revolution. The national security establishment already recognizes that the private sector and academia are the main drivers of technological innovation.

Alexander Karp, chief executive officer and co-founder of Palantir Technologies Inc., walks the grounds after the morning sessions during the Allen & Co. Media and Technology Conference in Sun Valley, Idaho, U.S., on Thursday, July 7, 2016. Billionaires, chief executive officers, and leaders from the technology, media, and finance industries gather this week at the Idaho mountain resort conference hosted by investment banking firm Allen & Co. Photographer: David Paul Morris/Bloomberg via Getty Images

Private services and national intelligence

In the United States there is dynamic cooperation between these bodies and the security community, including venture capital funds jointly owned by the government and private companies.

Take In-Q-Tel – a venture capital fund established 20 years ago to identify and invest in companies that develop innovative technology which serves the national security of the United States, thus positioning the American intelligence community at the forefront of technological development. The fund is an independent corporation, which is not subordinate to any government agency, but it maintains constant coordination with the CIA, and the US government is the main investor.

Its most successful endeavor, which has grown into a multi-billion-dollar company, though a somewhat controversial one, is Palantir, a data-integration and knowledge-management provider. But there are copious other startups and more established companies, ranging from sophisticated chemical detection (e.g. 908devices), automated language translation (e.g. Lilt), and digital imagery (e.g. Immersive Wisdom) to sensor technology (e.g. Echodyne), predictive analytics (e.g. Tamr), and cybersecurity (e.g. Interset).

In fact, a significant part of intelligence work is already being done by such companies, small and big. Companies like Hexagon, Nice, Splunk, Cisco, and NEC offer intelligence and law enforcement agencies a full suite of platforms and services, including analytical solutions such as video analytics, identity analytics, and social media analytics. These platforms help agencies obtain insights and make predictions from collected and historical data, using real-time data stream analytics and machine learning. A one-stop intelligence shop, if you will.

Another example of government and non-government collaboration is the Intelligence Advanced Research Projects Activity (IARPA) – a US government organization that reports to the Director of National Intelligence (DNI). Established in 2006, IARPA finances advanced research relevant to the American intelligence community, with a focus on cooperation between academic institutions and the private sector, in a broad range of technological and social-science fields. With a relatively small annual operational budget of around $3bn, the fund gives priority to multi-year development projects that meet the concrete needs of the intelligence community. The majority of the studies supported by the fund are unclassified and open to public scrutiny, at least until the stage of implementation by intelligence agencies.

Image courtesy of Bryce Durbin/TechCrunch

Challenging government hegemony in the intelligence industry 

These are all exciting opportunities; however, the future holds several challenges for intelligence agencies:

First, intelligence communities are losing their primacy over collecting, processing, and disseminating data. Until recently, the organizations’ raison d’être was, first and foremost, to obtain information about the enemy before said enemy could disguise that information.

Today, however, a lot of information is available, and a plethora of off-the-shelf tools (some of which are free) allow all parties, including individuals, to collect, process, and analyze vast amounts of data. Just look at IBM’s i2 Analyst’s Notebook, which, for just a few thousand dollars, gives analysts multidimensional visual analysis capabilities so they can quickly uncover hidden connections and patterns in data. Until recently, such capacities belonged only to governmental organizations.
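
The kind of link analysis such tools automate can be sketched in a few lines. This toy example (names and numbers invented) surfaces pairs of people who share a phone number – exactly the sort of hidden connection that is invisible in a flat table but obvious in a link chart:

```python
from collections import defaultdict
from itertools import combinations

# Raw records linking people to phone numbers, as might come from a data dump.
records = [
    ("alice", "555-0101"), ("bob", "555-0202"),
    ("carol", "555-0101"), ("bob", "555-0303"),
    ("dave", "555-0303"),
]

# Invert the records: for each number, collect everyone who used it.
by_number = defaultdict(set)
for person, number in records:
    by_number[number].add(person)

# Any two people on the same number form an implicit link.
hidden_links = {
    tuple(sorted(pair))
    for people in by_number.values() if len(people) > 1
    for pair in combinations(people, 2)
}
# hidden_links: {('alice', 'carol'), ('bob', 'dave')}
```

Commercial tools layer visualization, temporal analysis, and many more entity types on top, but the underlying operation is this same inversion and pairing at scale.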

A second challenge for intelligence organizations lies in the nature of the information itself and its many different formats, as well as in the collection and processing systems, which are usually separate and lacking standardization. As a result, it is difficult to merge all of the available information into a single product. For this reason, intelligence organizations are developing concepts and structures which emphasize cooperation and decentralization.

The private market offers a variety of tools for merging information; ranging from simple off-the-shelf solutions, to sophisticated tools that enable complex organizational processes. Some of the tools can be purchased and quickly implemented – for example, data and knowledge sharing and management platforms – while others are developed by the organizations themselves to meet their specific needs.

The third challenge relates to the change in the principle of intelligence prioritization. In the past, the collection of information about a given target required a specific decision to do so and dedicated resources to be allocated for that purpose, generally at the expense of allocation of resources to a different target. But in this era of infinite quantities of information, almost unlimited access to information, advanced data storage capabilities and the ability to manipulate data, intelligence organizations can now collect and store information on a massive scale, without the need to immediately process it – rather, it may be processed as required.

This development leads to other challenges, including: the need to pinpoint the relevant information when required; to process the information quickly; to identify patterns and draw conclusions from mountains of data; and to make the knowledge produced accessible to the consumer. It is therefore not surprising that most of the technological advancements in the intelligence field respond to these challenges, bringing together technologies such as big data with artificial intelligence, advanced information storage capabilities and advanced graphical presentation of information, usually in real time.

Lastly, intelligence organizations are built and operate according to concepts developed at the peak of the industrial era, which championed the principle of the assembly line – a process both linear and cyclical. The linear model of the intelligence cycle – collection, processing, research, distribution, and feedback from the consumer – has become less relevant. In this new era, the boundaries between the various intelligence functions, and between the intelligence organizations and their ecosystem, are increasingly blurred.


The brave new world of intelligence

A new order of intelligence work is therefore required, and intelligence organizations are currently in the midst of a redefinition process. Traditional divisions – e.g. between collection and research, between internal security organizations and positive intelligence, and between the public and private sectors – are all becoming obsolete. This is not another attempt to carry out structural reforms: there is a sense of epistemological rupture, one that requires a redefinition of the discipline, of the relationships that intelligence organizations have with their environments – from decision-makers to the general public – and the development of new structures and conceptions.

And of course, there are even wider concerns: legislators need to create a legal framework that accurately incorporates data-based assessments, takes the predictive aspects of these technologies into account, and still protects the privacy and security rights of individual citizens in nation-states that respect those concepts.

Despite recognizing the profound changes taking place around them, today’s intelligence institutions are still built and operated in the spirit of Cold War conceptions. In a sense, intelligence organizations have not internalized the complexity that characterizes the present time – a complexity which requires abandoning the dichotomous (inside versus outside) perception of the intelligence establishment, as well as the notion that the intelligence enterprise and government bodies hold a monopoly on knowledge; concepts that have become obsolete in an age of decentralization, networking, and increasing prosperity.

Although some doubt the ability of intelligence organizations to transform and adapt themselves to the challenges of the future, there is no doubt that they must do so in this era in which speed and relevance will determine who prevails.

10 Aug 2019

Telegram introduces new feature to prevent users from texting too often in a group

Telegram, a popular instant messaging app, has introduced a new feature to give group admins on the app better control over how members engage, the latest in a series of interesting features it has rolled out in recent months to expand its appeal.

The feature, dubbed Slow Mode, allows a group administrator to dictate how often a member can send a message in the group. If enabled, members who have just sent a message will have to wait between 30 seconds and an hour before they can say something again in that group.

The messaging platform, which had more than 200 million monthly active users as of early 2018, said the new feature is aimed at making conversations in groups “more orderly” and raising the “value of each individual message.” It suggested admins “keep [the feature] on permanently, or toggle as necessary to throttle rush hour traffic.”
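
Telegram hasn’t published its implementation, but the mechanism it describes is a classic per-user cooldown. A minimal sketch (class and member names are illustrative, not Telegram’s):

```python
import time

class SlowMode:
    """Minimal per-member cooldown in the spirit of Telegram's Slow Mode."""

    def __init__(self, delay):
        self.delay = delay   # seconds a member must wait between messages
        self.last_post = {}  # member -> timestamp of their last accepted post

    def try_post(self, member, now=None):
        """Return True if the message is accepted, False if still cooling down."""
        now = time.monotonic() if now is None else now
        last = self.last_post.get(member)
        if last is not None and now - last < self.delay:
            return False  # rejected: cooldown has not elapsed
        self.last_post[member] = now
        return True

group = SlowMode(delay=30)               # 30s, the shortest setting mentioned
first = group.try_post("amal", now=0)    # accepted: first message
second = group.try_post("amal", now=10)  # rejected: only 10s have passed
third = group.try_post("amal", now=35)   # accepted: cooldown expired
```

Note that rejected attempts don’t reset the timer, so a member can’t be locked out indefinitely by retrying too early.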

As tech platforms including WhatsApp grapple with containing the spread of misinformation on their messaging services, the new addition from Telegram, which has largely remained immune to any similar controversies, illustrates how proactively it works on adding features to control the flow of information on its platform.

In comparison, WhatsApp has enforced limits on how often a user can forward a message and is using machine learning techniques to weed out fraudulent users during the sign-up process itself.

Shivnath Thukral, Director of Public Policy for Facebook in India and South Asia, said at a conference this month that virality of content has dropped by 25% to 30% on WhatsApp since the messaging platform introduced limits on forwards.

Telegram isn’t marketing the “Slow Mode” as a way to tackle the spread of false information, though. Instead, it says the feature would give users more “peace of mind.” Indeed, unlike WhatsApp, which allows up to 256 users to be part of a group, up to a whopping 200,000 users can join a Telegram group.

In a similar vein, Telegram has also added an option that lets users send a message without triggering a sound notification at the recipient’s end. “Simply hold the Send button to have any message or media delivered without sound,” the app maker said. “Your recipient will get a notification as usual, but their phone won’t make a sound – even if they forgot to enable the Do Not Disturb mode.”
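
The hold-to-send gesture is a client feature, but Telegram’s Bot API has long exposed the same silent delivery through the documented `disable_notification` flag on `sendMessage`. A minimal sketch using only the standard library (the token, chat ID, and helper names are placeholders):

```python
import json
import urllib.request

def build_silent_message(chat_id, text):
    """Bot API sendMessage payload with silent delivery enabled."""
    return {
        "chat_id": chat_id,
        "text": text,
        "disable_notification": True,  # delivered, but no sound on the device
    }

def send_silent(token, chat_id, text):
    """POST the payload to the Bot API (token is a placeholder, not real)."""
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{token}/sendMessage",
        data=json.dumps(build_silent_message(chat_id, text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_silent_message(123456, "quiet hello")
```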

Telegram has also introduced a range of other small features, such as the ability for group owners to add custom titles for admins. Videos on the app now display thumbnail previews when a user scrubs through them, making it easier for them to find the right moment. Like YouTube, Telegram users can now share a video that jumps directly to a certain timestamp. Users can also animate their emojis now – if they are into that sort of thing.

In June, Telegram introduced a number of location-flavored features to allow users to quickly exchange contact details without needing to type in digits.

10 Aug 2019

Kobalt, Apple and smartwatches, Hadoop, customer support, and social work and AI

The Kobalt EC-1: How a Swedish saxophonist built Kobalt, the world’s next music unicorn

My favorite pieces we host on Extra Crunch are our EC-1 series of in-depth profiles and analyses of high-flying, fascinating startups. We launched Extra Crunch with a multi-part series on Patreon, and then we covered augmented reality and Pokémon Go creator Niantic and gaming platform Roblox.

This week, Extra Crunch media columnist Eric Peckham launched the first part of his three-part EC-1 series looking at music “operating system” startup Kobalt. Kobalt is perhaps not a household name like Roblox, but its influence is heard pretty much every time you listen to music. Kobalt is upending the traditional infrastructure for tracking music plays and capturing royalties for artists – an industry that today still involves people literally walking into bars and writing down what’s playing. From that base, Kobalt wants to expand into services that empower the next generation of stars and mid-market talent.

What I loved about this story is that not only is Kobalt completely rebuilding an otherwise stagnant industry, but its founder and CEO is also such a dynamic individual. Willard Ahdritz is a former saxophonist whose band was essentially abandoned by its music label – even as that label wouldn’t give up the economics that would have allowed the band to continue (some founders may have similar experiences with their venture investors). Ahdritz eventually started his own music label, called Telegram, and a bit later started Kobalt to solve the problems he kept running into on the music publishing side.

It’s been almost two decades, but today, Kobalt offers a suite of technologies and services and has its crosshairs on the big three labels — Universal, Sony, and Warner. It’s also raised a boatload of venture capital and is closing in on a unicorn valuation. Read the full story, learn more about this analytically fascinating business, and get ready for parts two and three coming soon.

Refer a friend to Extra Crunch

10 Aug 2019

We’re all doomed, 2019 edition

Every year the great and good (and bad) of the hacker/information-security world descend on Las Vegas for a week of conferences, in which many present their latest discoveries, and every year I try to itemize the most interesting (according to me) Black Hat talks for TechCrunch. Do not assume I attended all or even most of these. There are far too many for anyone to attend. But hopefully they’ll give you a sense of the state of the art.

First, though, let me just note that this post title is intended as sardonic. Yes, there is a lot of sloppy software out there, and yes, a lot of smart people keep finding holes, bugs, exploits, and design flaws even in good software, but we are not actually all doomed, and the belief that we are, and that anything connected to the Internet can be and probably has been hacked — an attitude which I like to call “security nihilism” — is spectacularly counterproductive.

In truth there is a lot of extremely good security out there, especially amid the big tech companies, and it keeps getting better, as the market for 0-days (previously undiscovered exploits) indicates. Most (though certainly not all) of the exploits below have already been reported and fixed, and patches have been rolled out. That said, much of the world has a lot of work to do to catch up with, say, Apple and Google’s security teams. Without further ado, the best-sounding talks of 2019:


Liveness Detection Hacking, from Tencent’s Xuanwu Security Lab, discusses how to trick “liveness” detectors for face or voice ID (or, perhaps, cryptocurrency KYC) by injecting fake video or audio streams, or, better yet, ordinary glasses with ordinary tape attached, which, best of all, they have named X-glasses.


All the 4G Modules Could Be Hacked, from Baidu’s Security Lab, recounts the researchers’ investigation of 4G modules for IoT devices — the components which connect machines to the Internet via cell networks, basically. As their summary memorably puts it, “We carried out this initiative and tested all the major brand 4G modules in the market (more than 15 different types). The results show all of them have similar vulnerabilities” and ends with the equally memorable “how to use these vulnerabilities to attack car entertainment systems of various brands and get remote control of cars.” Extra points for the slide with ‘Build Zombie cars (just like Furious 8)’, too.


Arm IDA and Cross Check: Reversing the Boeing 787’s Core Network by Ruben Santamarta of IOActive talks about how, after discovering an accidentally public directory of sensitive Boeing information online(!), Santamarta developed a chain of exploits that could conceivably lead from the Internet to the “Common Data Network” of a 787. Boeing strongly disputes this.

I have considerable respect for Santamarta, whose work I’ve written about before, and as he put it: “Boeing communicated to IOActive that there are certain built-in compiler-level mitigations [author’s note: !!] that, in their point of view, prevent these vulnerabilities from being successfully exploited. IOActive was unable to locate or validate the existence of those mitigations in the CIS/MS firmware version we analyzed. When asked, Boeing declined to answer whether these mitigations might have been added on a later version … We hope that a determined, highly capable third party can safely confirm that these vulnerabilities are not exploitable … We are confident owners and operators of these aircraft would welcome such independent validation and verification.” Indeed. But hey, if you can’t trust Boeing, who can you trust, right?


Reverse Engineering WhatsApp Encryption for Chat Manipulation, from researchers at Check Point Software, described how to abuse WhatsApp group chat to put words into others’ mouths, albeit only in quote texts, and send private messages which look like group-chat messages. (Note however that this is post-decryption, so you have to already be a legitimate member of the chat.)


In Behind the scenes of iOS and Mac Security, Ivan Krstić, Apple’s Head of Security Engineering, publicly spoke about Apple security. That’s remarkable enough right there! In particular, it’s worth noting his exegesis of how Find My works while preserving privacy, and that Apple is going to start to offer rooted iPhones to security researchers.


Simultaneously, an organization almost as devoted to secrecy as Apple revealed more about their security practices too. Kudos! I refer of course to the NSA, who came onstage to discuss their reverse-engineering framework Ghidra, and how it came to be open-sourced.


In Critical Zero Days Remotely Compromise the Most Popular Real-Time OS, researchers from Armis Security explained how VxWorks, a real-time OS you’ve never heard of but which runs on over 2 billion machines including aircraft, medical devices, industrial control systems, and spacecraft, also boasts vulnerabilities in esoteric corners of its TCP/IP stack that could lead to remote code execution. So that’s not good.


Finally, in Exploring the New World: Remote Exploitation of SQLite and Curl, Tencent’s Blade Team (yes, Chinese researchers have been absolutely killing it this year) showed how we actually are all doomed. I kid, I kid. But while you’ve probably never heard of them, SQLite and Curl are two absolutely fundamental software components — an incredibly widely used compact single-file database and a command-line networking tool, respectively — and the team used an exploit of the former to remotely attack Google Home, and of the latter to attack curl clients such as PHP/Apache as well as Git. Ouch.

10 Aug 2019

Most EU cookie ‘consent’ notices are meaningless or manipulative, study finds

New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.

As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.

But many don’t — even now, more than a year later.

The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked in tricks, who can blame them?)

The researchers conclude that if consent to drop cookies was being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.

The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.

The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites to compile a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play in order to paint a picture of current implementations.

They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs which the researchers tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.

Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.

A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.

And while they found that nearly all cookie notices (92%) contained a link to the site’s privacy policy, only a minority mention the specific purpose of the data collection (39%) or who can access the data (21%).

The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.

Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.

“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.

“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”

The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.

This is where consent gets manipulated — to flip visitors’ preference for privacy.

They found that the more choices offered in a cookie notice, the more likely visitors were to decline the use of cookies. (Which is an interesting finding in light of the vendor laundry lists frequently baked into the so-called “transparency and consent framework” which the industry association, the Interactive Advertising Bureau (IAB), has pushed as the standard for its members to use to gather GDPR consents.)

“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”

Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:

Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).

That figure rises only slightly, to between 1% and 4%, for users who would enable some cookie categories in the same privacy-by-default scenario.

“Our results… indicate that the privacy-by-default and purposed-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1 % of active consent for the use of third parties,” they write in conclusion.

They do flag some limitations with the study, pointing out that the dataset used to arrive at the 0.1% figure is biased — given the nationality of visitors is not generally representative of public Internet users, and the data was generated from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS — saying its data indicated a higher ‘accept all’ click rate, though still only marginally higher: just 5.6%.

Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.

This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.

Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.

It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.

You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)

Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.

For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)

While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.

(Those of you reading TechCrunch back in January 2018 may also remember this sage plain English advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)

Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.

Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android as soon as GDPR started being applied in May last year.

While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.

So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.

But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)

The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.

However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.

Here’s what the ICO says on Recital 25 of the ePrivacy Directive:

  • ‘specific website content’ means that you should not make ‘general access’ subject to conditions requiring users to accept non-essential cookies – you can only limit certain content if the user does not consent;
  • the term ‘legitimate purpose’ refers to facilitating the provision of an information society service – ie, a service the user explicitly requests. This does not include third parties such as analytics services or online advertising;

So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.

It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor in the costs of data protection compliance to their calculations — as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)

Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.

“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame, it was not intended to fix this specific problem. The uncertainty of the current situation is caused [by] the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subjected to a lot of ‘consent theatre’ (ie noisy yet non-compliant cookie notices) — which in turn is causing knock-on problems of consumer mistrust and consent fatigue for all these useless pop-ups, working against the core aims of the EU’s data protection framework.

“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”

“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies. 

Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.

“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.

“And the conflict of interest for Google with Chrome are obvious.”

The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.

At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.

They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.

Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).

And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.

Much has been made at the highest levels in Europe of being able to point to US counterparts, caught on the hop by ongoing tech privacy and security scandals, while EU policymakers savor the schadenfreude of seeing their US counterparts being forced to ask publicly whether it’s time for America to have its own GDPR.

With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.

However they are also aware the world is watching closely and critically — which makes enforcement a key piece that must slot in too. They need the GDPR to work on paper and be seen to be working in practice.

So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.

A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”

“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added. 

“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”

All of which suggests that the movement, when it comes, must come from a reforming adtech industry.

With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.

GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.

The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.

10 Aug 2019

Startups Weekly: Angel vs. VC

Hello and welcome back to Startups Weekly, a weekend newsletter that dives into the week’s noteworthy startups and venture capital news. Before I jump into today’s topic, let’s catch up a bit. Last week, I wrote about DoorDash’s acquisition of Caviar, which no one saw coming. Before that, I jotted down some notes on SoftBank’s second Vision Fund.

Remember, you can send me tips, suggestions and feedback to kate.clark@techcrunch.com or on Twitter @KateClarkTweets. If you don’t subscribe to Startups Weekly yet, you can do that here.

What’s new?

Alternative funding mechanisms, like Clearbanc’s revenue share model, may be on the rise but most Silicon Valley startups still turn to venture capital to get their company off the ground. As I’ve previously said in this newsletter, VC spending in 2019 is reaching record highs, already surpassing $62 billion. Angel investment, for its part, also continues to occupy a meaningful portion of private investment. So far this year, individual angels and angel groups in the U.S. have doled out $10 billion to startups.

Angel investors are not traditional venture capitalists bogged down by processes, quotas and fund economics. Rather, they’re (often) deep-pocketed former operators with expansive networks. For some founders, angel capital is superior to VC money; for others, a VC’s ability to write larger checks and participate in additional fundings as their company grows makes VC the only viable option.

So how do early-stage startups decide whose money to take (if they have that luxury)? Here’s what Jana Messerschmidt, both an investor at Lightspeed Venture Partners and a founding partner of the angel network #ANGELS, had to say: “It’s dependent on who the individual angel is, as well as who the individual partner is. In these frothier times, I encourage founders to interview investors who take a slot on their cap table with the same rigor they would a potential employee.”

Ben Ling, an early Facebook executive who spent years angel investing before launching his own institutional venture capital fund, Bling Capital, tells TechCrunch the plus side of angel investors is that they are oftentimes less sensitive to valuations. Angels, while they can’t usually invest as much capital as a VC, tend to offer better terms and to accept less rigid deal structures.

But being an investor isn’t an angel’s full-time job, typically. The limited amount of time an angel can give each company may be problematic for a founder seeking mentorship but a non-issue for a more experienced founder, who is simply seeking an individual passionate about her or his vision. 

Given the rise in venture capital investment overall, more founders and former operators are coming into wealth and opting to try on the VC hat for size. And more and more, those people are becoming professional investors with an appetite for a bigger pool of capital. Ling, as mentioned, decided last year to raise his first institutional fund, a $60 million effort, for example: “I think it’s rare for super angels to ‘beat’ firms for most regular financings but it certainly can happen,” Ling tells TechCrunch.

Presumably, that’s why he and many others (Cyan Banister, Keith Rabois, Ron Conway, James Currier) made the switch to “real” VC — to win over the best deals. As angels turn into VCs, whether your startup’s money came from one person’s wallet or an institutional fund matters a whole lot less. Just make sure you have good people investing in your company, and while you’re at it, make sure they’re diverse too.

That’s all for now… Onto the news.

WeWork IPO update


Bloomberg reported Friday that WeWork was expected to make its IPO filing available next week. Soon, we can all finally get an inside look at the co-working giant’s financials. A reminder, WeWork was last valued at an eye-popping $47 billion and it wants to raise some $3.5 billion in the IPO. Skeptical? Me too.

#Equitypod

If you enjoy this newsletter, be sure to check out TechCrunch’s venture-focused podcast, Equity. In this week’s episode, available here, Equity co-host Alex Wilhelm and I discuss a new trend in venture capital: sperm storage startups. Equity drops every Friday at 6:00 am PT, so subscribe to us on Apple Podcasts, Overcast and Spotify.

M&A

Airbnb announced its acquisition of Urbandoor, a platform that offers extended stays to corporate clients, earlier this week. The terms of the deal were not disclosed, though an SEC filing connected with the deal emerged Friday, indicating the deal was worth more than $80 million in what’s likely a combination of cash and stock. We’ve got all the details on the deal here.

Healthtech & VC

Now it’s time for your weekly reminder to sign up for Extra Crunch. For a low price, you can learn more about the startups and venture capital ecosystem through exclusive deep dives, Q&As, newsletters, resources and recommendations and fundamental startup how-to guides. Here’s a passage from my personal favorite EC post of the week:

“Why is tech still aiming for the healthcare industry? It seems full of endless regulatory hurdles or stories of misguided founders with no knowledge of the space, running headlong into it, only to fall on their faces. Theranos is a prime example of a founder with zero health background or understanding of the industry — and just look what happened there! The company folded not long after founder Elizabeth Holmes came under criminal investigation and was barred from operating in her own labs for carelessly handling sensitive health data and test results…”

Read the rest of Sarah Buhr’s piece, ‘What leading healthtech VCs are interested in,’ here.


10 Aug 2019

How safe are school records? Not very, says student security researcher

If you can’t trust your bank, government or your medical provider to protect your data, what makes you think students are any safer?

Turns out, according to one student security researcher, they’re not.

Eighteen-year-old Bill Demirkapi, a recent high school graduate in Boston, Massachusetts, spent much of his latter school years with an eye on his own student data. Through self-taught pen testing and bug hunting, Demirkapi found several vulnerabilities in his school’s learning management system, Blackboard, and his school district’s student information system, known as Aspen and built by Follett, which centralizes student data, including performance, grades, and health records.

The former student reported the flaws and revealed his findings at the Def Con security conference on Friday.

“I’ve always been fascinated with the idea of hacking,” Demirkapi told TechCrunch prior to his talk. “I started researching but I learned by doing,” he said.

Among the more damaging issues Demirkapi found in Follett’s student information system was an improper access control vulnerability which, if exploited, could have allowed an attacker to read and write to the central Aspen database and obtain any student’s data.

Blackboard’s Community Engagement platform had several vulnerabilities, including an information disclosure bug. A debugging misconfiguration allowed him to discover two subdomains, which spat back the credentials for Apple app provisioning accounts for dozens of school districts, as well as the database credentials for most, if not all, Blackboard Community Engagement platform deployments, said Demirkapi.

“School data or student data should be taken as seriously as health data. The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”
Bill Demirkapi, security researcher

Another set of vulnerabilities could have allowed an authorized user, such as a student, to carry out SQL injection attacks. Demirkapi said six databases could be tricked into disclosing data by injecting SQL commands, exposing grades, school attendance data, punishment history, library balances, and other sensitive and private information.

Some of the SQL injection flaws were blind attacks, in which the database reveals information only indirectly, through yes-or-no responses rather than returning data outright, meaning dumping the entire database would have been more difficult but not impossible.
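To make the class of bug concrete, here is a minimal, self-contained Python sketch of how string-built queries open the door to injection while parameterized queries close it. The table, field names, and payload are purely illustrative assumptions; this is not Aspen’s or Blackboard’s actual code.

```python
import sqlite3

# Illustrative only: a toy table standing in for a student-records database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student TEXT, grade TEXT)")
conn.execute("INSERT INTO grades VALUES ('alice', 'A'), ('bob', 'C')")

def lookup_vulnerable(student):
    # String formatting lets attacker-controlled input rewrite the query itself.
    return conn.execute(
        "SELECT grade FROM grades WHERE student = '%s'" % student
    ).fetchall()

def lookup_safe(student):
    # A parameterized query treats the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT grade FROM grades WHERE student = ?", (student,)
    ).fetchall()

# A classic injection payload turns the WHERE clause into a tautology,
# dumping every row instead of one student's record.
payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # every grade in the table
print(lookup_safe(payload))        # no rows: no student has that literal name
```

The fix is the same regardless of database or framework: never interpolate user input into SQL text; bind it as a parameter instead.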

In all, over 5,000 schools and over five million students and teachers were impacted by the SQL injection vulnerabilities alone, he said.

Demirkapi said he was mindful to not access any student records other than his own. But he warned that any low-skilled attacker could have done considerable damage by accessing and obtaining student records, not least thanks to the simplicity of the database’s password. He wouldn’t say what it was, only that it was “worse than ‘1234’.”

But finding the vulnerabilities was only one part of the challenge. Disclosing them to the companies turned out to be just as tricky.

Demirkapi admitted that his disclosure with Follett could have been better. He found that one of the bugs gave him improper access to create his own “group resource,” such as a snippet of text, which was viewable to every user on the system.

“What does an immature 11th grader do when you hand him a very, very, loud megaphone?” he said. “Yell into it.”

And that’s exactly what he did. He sent out a message to every user, displaying each user’s login cookies on their screen. “No worries, I didn’t steal them,” the alert read.

“The school wasn’t thrilled with it,” he said. “Fortunately, I got off with a two-day suspension.”

He conceded it wasn’t one of his smartest ideas. He wanted to show his proof-of-concept but was unable to contact Follett with details of the vulnerability. He later went through his school, which set up a meeting, and disclosed the bugs to the company.

Blackboard, however, ignored Demirkapi’s emails for several months, he said. He knows because after the first month of being ignored, he included an email tracker, allowing him to see how often the email was opened — which turned out to be several times in the first few hours after sending. And yet the company still did not respond to the researcher’s bug report.
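Email trackers of this sort typically work by embedding an invisible, remotely hosted one-pixel image whose URL carries a unique token: each time a mail client renders the message it fetches the image, and the server logs a hit. A minimal sketch of the server-side logic follows; the names and token scheme are illustrative assumptions, not Demirkapi’s actual tooling.

```python
import datetime

# A 1x1 transparent GIF: the classic "tracking pixel" payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

opens = {}  # token -> list of timestamps when the pixel was fetched

def track(token):
    """Record that the image embedded in email `token` was fetched,
    then return the pixel bytes for the mail client to render."""
    opens.setdefault(token, []).append(
        datetime.datetime.now(datetime.timezone.utc)
    )
    return PIXEL
```

An HTTP endpoint (e.g. `/t/<token>.gif`, a hypothetical path) would call `track()` and serve the bytes; the list of timestamps per token then shows how often, and roughly when, the message was opened. Note many mail clients now block or proxy remote images, so open counts are an approximation at best.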

Blackboard eventually fixed the vulnerabilities, but Demirkapi said he found that the companies “weren’t really prepared to handle vulnerability reports,” despite Blackboard ostensibly having a published vulnerability disclosure process.

“It surprised me how insecure student data is,” he said. “School data or student data should be taken as seriously as health data,” he said. “The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”

He said if a teenager had discovered serious security flaws, it was likely that more advanced attackers could do far more damage.

Heather Phillips, a spokesperson for Blackboard, said the company appreciated Demirkapi’s disclosure.

“We have addressed several issues that were brought to our attention by Mr. Demirkapi and have no indication that these vulnerabilities were exploited or that any clients’ personal information was accessed by Mr. Demirkapi or any other unauthorized party,” the statement said. “One of the lessons learned from this particular exchange is that we could improve how we communicate with security researchers who bring these issues to our attention.”

Follett spokesperson Tom Kline said the company “developed and deployed a patch to address the web vulnerability” in July 2018.

The student researcher said he was not deterred by the issues he faced with disclosure.

“I’m 100% set already on doing computer security as a career,” he said. “Just because some vendors aren’t the best examples of good responsible disclosure or have a good security program doesn’t mean they’re representative of the entire security field.”