21 Nov 2018

Apple puts its next generation of AI into sharper focus as it picks up Silk Labs

Apple’s HomePod is a distant third behind Amazon and Google when it comes to market share for smart speakers that double as home hubs, with less than 5 percent of the US market for these devices, according to one recent survey. And its flagship personal assistant, Siri, has also been found to lag behind Google’s when it comes to comprehension and precision. But there are signs that the company is intent on doubling down on AI, putting it at the center of its next generation of products, and it’s using acquisitions to help it do so.

The Information reports that Apple has quietly acquired Silk Labs, a startup based out of San Francisco that had worked on AI-based personal assistant technology both for home hubs and mobile devices.

There are two notable things about Silk’s platform that set it apart from other assistants: it was able to modify its behaviour as it learned more about its users over time (using both sound and vision), and it was designed to work on-device — a nod to privacy concerns about “always on” speakers listening in, to improved processing power on devices, and to the constraints of cloud and networking technology.

Apple has not returned requests for comment, but we’ve found that at least some of Silk Labs’ employees appear already to be working for Apple (LinkedIn lists nine employees for Silk Labs, all with engineering backgrounds).

That means it’s not clear if this is a full acquisition or an acqui-hire — as we learn more we will update this post — but bringing on the team (and potentially the technology) speaks to Apple’s interest in building products that are not mere repeats of what we already have on the market.

Silk Labs first emerged in February 2016, the brainchild of Andreas Gal, the former CTO of Mozilla, who had also created the company’s ill-fated mobile platform, Firefox OS; and Michael Vines, who came from Qualcomm. (Vines, incidentally, moved on in June 2018 to become the principal engineer for a blockchain startup, Solana.)

Its first product was originally conceived as integrated software and hardware: the company raised just under $165,000 in a Kickstarter to build and ship Sense, a smart speaker that would provide a way to control connected home devices and answer questions, and — with a camera integrated into the device — be able to monitor rooms and learn to recognise people and their actions.

Just four months later, Silk Labs announced that it would shelve the Sense hardware to focus specifically on the software, called Silk, after it said it started to receive inquiries from OEMs interested in getting a version of the platform to run on their own devices (it also raised money outside of Kickstarter, around $4 million).

Potentially, Silk could give those OEMs a way of differentiating from the plethora of devices that are already on the market. In addition to products from the likes of Google and Amazon, there are also a number of speakers powered by those assistants, along with devices using Cortana from Microsoft.

When Silk Labs announced that it was halting hardware development, it noted that it was in talks over some commercial partnerships (while at the same time open sourcing a basic version of the Silk platform for communicating with IoT devices).

Silk Labs never disclosed the names of those partners, but buying and shutting down the company would be one way of making sure that the technology stays with just one company.

It’s tempting to match up what Silk Labs has built so far with Apple’s efforts in its own smart speaker, the HomePod.

Specifically, it could provide the HomePod with a smarter engine that learns about its users, keeps working even when the internet is down, protects user privacy and, crucially, becomes a linchpin for how you might operate everything else in your connected life.

That would make for a mix of features that would clearly separate it from the market leader of the moment, and play into aspects — specifically privacy — that people are increasingly starting to value more.

But if you consider the spectrum of hardware and services that Apple is now involved in, you can see that the Silk team, and potentially the IP, may also end up having a wider impact.

Apple has had a mixed run when it comes to AI. The company was an early mover when it put its Siri voice assistant into the iPhone 4S in 2011, and for a long time people would mention it alongside Amazon and Google (less so Microsoft) when lamenting how a select few technology companies were snapping up all the AI talent, leaving little room for other companies to build products or have a stake in how the technology was developed on a larger scale.

More recently, though, it appears that the likes of Amazon — with its Alexa-powered portfolio of devices — and Google have stolen a march when it comes to consumer products built with AI technologies at their core, and as their primary interface with their users. (Siri, if anything, sometimes feels like a nuisance when you accidentally call it into action by pressing the Touch Bar or the home button on an older model iPhone.)

But it’s almost certainly wrong to assume that Apple — one of the world’s biggest companies, known for playing its hand close to its chest — has lost its way in this area.

There have been a few indications, though, that it’s getting serious and rethinking how it is doing things.

A few months ago, it reorganized its AI teams under ex-Googler John Giannandrea, losing some talent in the process but more significantly setting the pace for how its Siri and Core ML teams would work together and across different projects at the company, from developer tools to mapping and more. 

Apple has also made dozens of smaller and bigger acquisitions in the last several years that speak to it picking up more talent and IP in the quest to build out its AI muscle across different areas, from augmented reality and computer vision through to big data processing at the back end. It’s even acquired other startups, such as VocalIQ in England, that focus on voice interfaces and ‘learn’ from interactions.

To be sure, the company has started to see a deceleration in iPhone unit sales (if not revenues: prices are higher than ever), and that will mean a focus on newer devices, and ever more weight put on the services that run on those devices. Services can be augmented and expanded, and they represent recurring income — two big reasons why Apple will shift to putting more investment into them.

Expect to see that AI net covering not just the iPhone, but computers, Apple’s smart watch, its own smart speaker, the HomePod, Apple Music, Health and your whole digital life.

21 Nov 2018

Driven to safety — it’s time to pool our data

For most Americans, the thought of cars autonomously navigating our streets still feels like a science fiction story. Despite the billions of dollars invested into the industry in recent years, no self-driving car company has proven that its technology is capable of producing mass-market autonomous vehicles even in the near term.

In fact, a recent IIHS investigation identified significant flaws in assisted driving technology and concluded that in all likelihood “autonomous vehicle[s] that can go anywhere, anytime” will not be market-ready for “quite some time.” The complexity of the problem has even led Uber to potentially spin off its autonomous car unit as a means of soliciting minority investments — in short, the cost of solving this problem is time and billions (if not trillions) of dollars.

Current shortcomings aside, there is a legitimate need for self-driving technology: every year, nearly 1.3 million people die and 2 million people are injured in car crashes. In the U.S. alone, 40,000 people died last year due to car accidents, putting car accidents among the top 15 leading causes of death in America. GM has determined that human error is the major cause of 94 percent of those car crashes. Independent studies have verified that technological advances such as ridesharing have reduced automotive accidents by removing from our streets drivers who should not be operating vehicles.

We should have every reason to believe that autonomous driving systems — deterministic and finely tuned computers always operating at peak performance — will all but eliminate on-road fatalities. The challenge of developing self-driving technology is rooted in replicating the incredibly nuanced cognitive decisions we make every time we get behind the wheel.

Anyone with experience in the artificial intelligence space will tell you that the quality and quantity of training data are among the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self-driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focused on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together.

The siloed petabytes (and soon exabytes) of road data that these companies hoard should be, without giving away trade secrets or information about their models, pooled into a nonprofit consortium, perhaps even a government entity, where every mile driven is shared and audited for quality. By all means, take this data to your private company and consume it, make your models smarter and then provide more road data to the pool to make everyone smarter — and more importantly, increase the pace at which we have truly autonomous vehicles on the road, and their safety once they’re there.

This data is complex and diverse, yet fundamentally public — I am not suggesting that people hand over private, privileged data, but that companies actively pool and combine what their cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something that they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bring them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people are preventably dying every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier-to-entry in the space and increasing competition.

Though big technology and automotive companies may scoff at the idea of sharing their data, the competition generated from a level data playing field could create tens of thousands of new high-tech jobs. Any government dollar spent on aggregating road data would be considered capitalized as opposed to lost — public data sets can be reused by researchers for AI and cross-disciplinary projects for many years to come.

The most ethical (and most economically sensible) choice is that all data generated by autonomous vehicle companies should be part of a contiguous system built to make for a smarter, safer humanity. We can’t afford to wait any longer.

21 Nov 2018

Thanksgiving travel nightmare projected to hit these U.S. cities the worst

The latest data from Inrix paints a dismal picture for folks traveling Wednesday (that’s today!) ahead of the Thanksgiving holiday.

Drivers in Boston, New York City and San Francisco will see the largest delays with drive times nearly quadruple the norm, according to AAA and Inrix, which aggregates and analyzes traffic data collected from vehicles and highway infrastructure.

AAA is projecting 54.3 million Americans will travel 50 miles or more away from home this Thanksgiving, a 4.8 percent increase over last year. It’s a record breaker of a year for travel. This weekend will see the highest Thanksgiving travel volume in more than a dozen years (since 2005), with 2.5 million more people taking to the nation’s roads, skies, rails and waterways compared with last year, according to AAA.

The roads will be particularly packed, according to Inrix. Some 48.5 million people — 5 percent more than last year — will travel on roads this Thanksgiving holiday, a period defined as Wednesday, November 21 to Sunday, November 25.

The worst travel times? They’re already here in some places. San Francisco, Chicago and Los Angeles will be particularly dicey Wednesday, with travel times two to four times longer than usual. Other cities projected to have the worst travel times include Detroit along U.S. Highway 23 North, Houston on north- and southbound Interstate 45 and Los Angeles, particularly northbound on Interstate 5.

[Image: Inrix Thanksgiving traffic data]

Even travel times to airports have increased Wednesday. Travel times from downtown Seattle to the airport via Interstate 5 south and Chicago to O’Hare Airport via the Kennedy Expressway will be particularly long. The Chicago route, for instance, is projected to take 1 hour and 27 minutes at the peak time between 1:30 pm and 3:30 pm CT.

There are alternatives, of course. In most cases, the best days to travel will be on Thanksgiving Day, Friday or Saturday, according to Inrix and AAA.

21 Nov 2018

Facebook has poached the DoJ’s Silicon Valley antitrust chief

Facebook has recruited Kate Patchen, a veteran of the U.S. Department of Justice who led its antitrust office in Silicon Valley, to be a director and associate general counsel of litigation.

Patchen takes up her post amid ongoing scandals and reputation crises for her new employer, joining Facebook this month, according to her LinkedIn profile.

The move was spotted earlier by the FT, which reports that Facebook also posted a job listing on LinkedIn for a “lead counsel” in Washington to handle competition issues two weeks ago — suggesting a broader effort to bulk up its in-house expertise.

Patchen brings to her new employer a wealth of experience on the antitrust topic, having spent 16 years at the DoJ, where she began as a trial attorney before becoming an assistant chief in the antitrust division in 2014. Two years later she was made chief.

We reached out to Facebook about the hire and it acknowledged our email but did not immediately provide comment on its decision to recruit a specialist in antitrust enforcement.

The social media giant certainly has plenty playing on its mind on this front.

In 2016 it landed firmly on lawmakers’ radar and in hot political waters when the extent of Kremlin-funded election interference activity on the platform first emerged. Since then a string of security and data misuse scandals has only dialed up the political pressure on Facebook.

Domestic lawmakers are now most actively discussing how to regulate social media, though competition scrutiny of big tech is increasing in general, with calls from some quarters to break up platform giants as a fix for a range of damaging impacts.

The FT notes, for example, that Democratic lawmakers recently introduced legislation to address “the threat of economic concentration.” And the sight of Democrats pushing for tougher competition enforcement suggests the party’s love affair with Silicon Valley tech giants is well and truly over.

In Europe, competition regulators have already moved against big tech, issuing two very large fines in recent years against Google products, with more investigations ongoing.

Amazon is also now on the Commission’s radar. At a national level, EU competition regulators have been paying increasing attention to how the adtech industry is dominated by the duopoly of Google and Facebook.

Patchen, meanwhile, joins Facebook at the same time as some long-serving veterans are headed out the door — including public policy chief Elliot Schrage.

Schrage’s departure has been in train for some months, but a leaked internal memo we obtained this week suggests he’s being packaged up as a convenient fall guy for a freshly cracked public relations scandal.

Last month Facebook announced it was hiring more new blood: former U.K. deputy prime minister Nick Clegg as its new head of global policy and comms — with Schrage slated then to stay on in an advisory capacity.

In other recent senior leadership moves, Facebook CSO Alex Stamos also left the company this summer, while chief legal officer Colin Stretch announced he would leave at the end of the year.

But according to a Recode report this month, Stretch has now put his exit on hold — until at least next summer — apparently deciding to stay to help out with ongoing legal and political crises.

21 Nov 2018

Google Assistant iOS update lets you say ‘Hey Siri, OK Google’

Apple probably didn’t intend to let competitors take advantage of Siri Shortcuts this way, but you can now launch Google Assistant on your iPhone by saying “Hey Siri, OK Google.”

But don’t expect a flawless experience — it takes multiple steps. After updating the Google Assistant app on iOS, you need to open the app to set up a new Siri Shortcut for Google Assistant.

As the name suggests, Siri Shortcuts lets you record custom phrases to launch specific apps or features. For instance, you can create Siri Shortcuts to play your favorite playlist, launch directions to a specific place, text someone and more. If you want to chain multiple actions together, you can even create complicated algorithms using Apple’s Shortcuts app.

By default, Google suggests the phrase “OK Google”. You can choose something shorter, or “Hey Google” for instance. After setting that up, you can summon Siri and use this custom phrase to launch Google’s app.

You may need to unlock your iPhone or iPad to let iOS open the app. The Google Assistant app then automatically listens to your query. Again, you need to pause and wait for the app to appear before saying your query.

This is quite a cumbersome workaround and I’m not sure many people are going to use it. But the fact that “Hey Siri, OK Google” exists is still very funny.

On another note, Google Assistant is still the worst when it comes to your privacy. The app pushes you to enable “web & app activity”, the infamous all-encompassing privacy destroyer. If you activate that setting, Google will collect your search history, your Chrome browsing history, your location, your credit card purchases and more.

It’s a great example of dark pattern design. If you haven’t enabled web & app activity, there’s a flashy blue banner at the bottom of the app that tells you that you can “unlock more Assistant features”.

When you tap it, you get a cute little animated drawing to distract you from the text. There’s only one button that says “More”. If you tap it, the “More” button becomes “Turn on” — many people are not even going to see “No thanks” on the bottom left.

It’s a classic persuasion method. If somebody asks you multiple questions and you say yes every time, you’ll tend to say yes to the last question even if you don’t agree with it. You tapped on “Get started” and “More” so you want to tap on the same button one more time. If you say no, Google asks you one more time if you’re 100 percent sure.

So make sure you read everything and you understand that you’re making a privacy tradeoff by using Google Assistant.

21 Nov 2018

Drone.io, Packet team on free continuous delivery service for open source developers

Drone.io, makers of the open source Drone continuous integration/continuous delivery (CI/CD) tool, announced Drone Cloud today, a new CI/CD cloud service that it’s making available for free to open source projects. The company is teaming with Packet, which is offering to run the service for free on its servers.

Drone.io co-founder Brad Rydzewski describes Drone as “a container-native continuous delivery platform, and its goal is to help automate the developer workflow from testing to release.” Continuous delivery is an approach built on cloud-native principles — the idea that you can manage cloud and on-premises environments with a single set of management tools. From a developer standpoint, it relies on containers as a way to continuously deliver application updates as changes happen.

As part of that approach, the newly announced Drone Cloud provides a publicly hosted CI/CD cloud offering. “It’s free for the open source community. So it’s an open source only offering. There’s no paid plan, and it’s only available to public GitHub repositories,” Rydzewski explained.

Rydzewski says the service was born out of a need for his own project. He found it hard to find publicly hosted solutions that offered a way to test a variety of operating systems and chip architectures. “It’s really something we needed for ourselves. The Drone community wanted to run Drone on Windows on Arm 64 processors. We actually had no way to build and test the software on these platforms because there was just no publicly hosted solution that we could use,” he explained.

When they decided to build this solution for themselves, they figured it was going to be useful to other open source projects that also want to ship and support multiple operating systems and architectures. They got Packet on board to offer the infrastructure.

Packet offers a variety of server options with different operating systems and chips, and Rydzewski says this was an important consideration for the open source developers who will take advantage of this service. Packet is making the latest Intel Xeon Scalable, AMD EPYC and Arm-based servers available to users of the Drone Cloud service for free as part of a multi-year donation to support the project.

“As open source software is deployed to increasingly diverse environments, from traditional data centers to cars and even drones, the need for multi-architecture builds has exploded,” Jacob Smith, co-founder and CMO at Packet said in a statement. This is Packet’s way of supporting that effort.

Drone does not intend to establish a paid layer for Drone Cloud, according to Rydzewski, but he hopes it shows what Drone can do, which in turn could attract some developers to the paid version of the product. In addition to supporting the open source version, the company offers a paid version that can be installed on premises as a part of a private cloud-native development environment.

21 Nov 2018

Facebook appeals UK data watchdog’s £500k Cambridge Analytica fine

Facebook has said it will appeal against a £500,000 penalty issued by the UK’s data watchdog this summer following a lengthy investigation into the Cambridge Analytica data misuse scandal.

Facebook told the regulator an estimated one million UK users were among the 87M of its users whose private data was harvested by Dr. Aleksandr Kogan and his company Global Science Research back in 2014 — which passed the data to the now defunct political consultancy, Cambridge Analytica.

Their intent had been to build psychographic profiles of US voters, although Kogan shared the harvested Facebook data more widely — and the UK regulator is still looking into all the places it ended up.

In July, the ICO announced it intended to fine Facebook the maximum possible amount under the UK’s old data protection regime — saying it was “clear” the company had contravened the law “by failing to keep users data safe” when its systems allowed Kogan’s app to scrape Facebook user data.

It confirmed the penalty a month ago, with commissioner Elizabeth Denham saying then: “Facebook failed to sufficiently protect the privacy of its users before, during and after the unlawful processing of this data. A company of its size and expertise should have known better and it should have done better.”

The text of its October decision, though, includes the admission that the ICO had not found evidence that any UK Facebook users’ data had actually been passed on by Kogan to third parties, including Cambridge Analytica.

“Facebook has asserted that the only individuals whose personal data was used in this way [shared by Kogan with third parties including Cambridge Analytica] were US residents,” it writes on this, before adding that even if Facebook’s assertion is correct some US residents would also have been UK users “from time to time” (e.g. if visiting the UK) — and thus would fall under its remit.

It also pointed to “serious risk” to UK users’ data being material to its decision, writing: “Dr. Kogan and/or GSR were put in a position where they were effectively at liberty (if they so chose) to use the personal data of UK residents for such purposes, or to share such data with persons or companies who would use it for such purposes.”

On that basis, Facebook appears to be resting its appeal against the ICO decision on its own assertion to the ICO that there’s no evidence of UK users’ data being used.

Commenting on its decision to appeal against the ICO’s fine in a statement, Anna Benckert, its EMEA VP & associate general counsel, said:

We have said before that we wish we had done more to investigate claims about Cambridge Analytica in 2015. We made major changes to our platform back then and have also significantly restricted the information app developers can access. And we are investigating all historic apps that had access to large amounts of information before we changed our platform policies in 2014.

The ICO’s investigation stemmed from concerns that UK citizens’ data may have been impacted by Cambridge Analytica, yet they now have confirmed that they have found no evidence to suggest that information of Facebook users in the UK was ever shared by Dr Kogan with Cambridge Analytica, or used by its affiliates in the Brexit referendum.

Therefore, the core of the ICO’s argument no longer relates to the events involving Cambridge Analytica. Instead, their reasoning challenges some of the basic principles of how people should be allowed to share information online, with implications which go far beyond just Facebook, which is why we have chosen to appeal.

For example, under ICO’s theory people should not be allowed to forward an email or message without having agreement from each person on the original thread. These are things done by millions of people every day on services across the internet, which is why we believe the ICO’s decision raises important questions of principle for everyone online which should be considered by an impartial court based on all the relevant evidence.

We’ve reached out to the ICO for comment. Update: An ICO spokesperson said: “Any organisation issued with a monetary penalty notice by the Information Commissioner has the right to appeal the decision to the First-tier Tribunal. The progression of any appeal is a matter for the tribunal. We have not yet been notified by the Tribunal that an appeal has been received.”

Last month Denham explained the decision to impose the maximum penalty on Facebook by saying: “We considered these contraventions to be so serious we imposed the maximum penalty under the previous legislation. The fine would inevitably have been significantly higher under the GDPR. One of our main motivations for taking enforcement action is to drive meaningful change in how organizations handle people’s personal data.”

This summer her office issued its first ever enforcement notice under the new GDPR data protection regime against Canadian data firm AIQ, which had supplied software and services to the disgraced Cambridge Analytica.

But last month the ICO issued a narrower enforcement notice, replacing the earlier notice, after AIQ appealed.

21 Nov 2018

Affetto is the wild boy head robot of your nightmares

Affetto is a robot that can smile at you while it pierces your soul with its endless, dead stare. Created by researchers at Osaka University, this crazy baby-head robot can mimic human emotions by scrunching up its nose, smiling, and even closing its eyes and frowning. Put it all together and you get a nightmare from which there is no sane awakening!

“Android robot faces have persisted in being a black box problem: they have been implemented but have only been judged in vague and general terms,” study first author Hisashi Ishihara says. “Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning.”

We last saw Affetto in action in 2011 when it was even more frightening than it is now. The researchers have at least added some skin and hair to this cyberdemon, allowing us the briefest moment of solace as we stare into Affetto’s dead eyes and hope it doesn’t gum us to death. Ain’t the future grand?

The goal, obviously, is to lull humans into a state of calm as the rest of Affetto’s body, spiked and bladed, can whir them to pieces. The researchers write:

A trio of researchers at Osaka University has now found a method for identifying and quantitatively evaluating facial movements on their android robot child head. Named Affetto, the android’s first-generation model was reported in a 2011 publication. The researchers have now found a system to make the second-generation Affetto more expressive. Their findings offer a path for androids to express greater ranges of emotion, and ultimately have deeper interaction with humans.

The researchers investigated 116 different facial points on Affetto to measure its three-dimensional movement. Facial points were underpinned by so-called deformation units. Each unit comprises a set of mechanisms that create a distinctive facial contortion, such as lowering or raising of part of a lip or eyelid. Measurements from these were then subjected to a mathematical model to quantify their surface motion patterns.
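For a rough sense of what quantifying those surface motion patterns can look like in practice, here is a minimal, purely illustrative Python sketch — not the Osaka team’s actual model or data. It assumes you have 3D coordinates for each tracked facial point at rest and at a few actuation levels of a single hypothetical deformation unit, and it fits a simple per-point slope relating actuation to displacement:

```python
import numpy as np

def motion_per_point(rest, actuated):
    """rest: (N, 3) coordinates of N facial points with the unit at rest.
    actuated: dict mapping actuation level (0..1) -> (N, 3) measured coordinates.
    Returns one slope per point: how far it moves per unit of actuation."""
    levels = sorted(actuated)                                    # e.g. [0.25, 0.5, 0.75, 1.0]
    disp = np.stack([np.linalg.norm(actuated[l] - rest, axis=1)  # (L, N) displacement
                     for l in levels])                           # magnitudes per level
    levels = np.asarray(levels)
    # least-squares slope through the origin: displacement ~ slope * actuation
    return (levels @ disp) / (levels @ levels)

# Synthetic demo with 116 points: this made-up unit mostly pushes points along z.
rng = np.random.default_rng(0)
rest = rng.normal(size=(116, 3))
actuated = {l: rest + np.array([0.0, 0.0, 0.5 * l]) + rng.normal(0.0, 0.01, size=(116, 3))
            for l in (0.25, 0.5, 0.75, 1.0)}
print(motion_per_point(rest, actuated)[:5].round(3))  # slopes near 0.5
```

Points with larger slopes are the ones most strongly driven by that unit — the kind of measurement that lets researchers tune each mechanism against the expression it is supposed to produce.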

Pro tip: Just slap one of these on your Roomba and send it around the house. The kids will love it and the cat will probably die of a heart attack.

21 Nov 2018

Amazon admits it exposed customer email addresses, but refuses to give details

Amazon’s renowned secrecy extends to its response to a new security issue: the company is withholding information that could help victims protect themselves.

Amazon emailed users Tuesday, warning them that it had exposed an unknown number of customer email addresses after a “technical error” on its website.

When reached for comment, an Amazon spokesperson told TechCrunch that the issue exposed names as well as email addresses. “We have fixed the issue and informed customers who may have been impacted.” The company said it had emailed all impacted users to be cautious.

In response to a request for specifics, a spokesperson said the company had “nothing to add beyond our statement.” The company denies there was a data breach of its website or any of its systems, and says it has fixed the issue, but it declined our request for more information, including the cause, scale and circumstances of the error.

Amazon’s reticence here puts those impacted at greater risk. Users don’t know which of Amazon’s sites was impacted, who their email address could have been exposed to, or any ballpark figure for the number of victims. It’s also unclear whether Amazon has contacted, or plans to contact, any government regulatory bodies.

“We’re contacting you to let you know that our website inadvertently disclosed your email address due to a technical error,” said Amazon in the email with the subject line: “Important Information about your Amazon.com Account.” The only details Amazon provided were that: “The issue has been fixed. This is not a result of anything you have done, and there is no need for you to change your password or take any other action.”

The security lapse comes days ahead of one of the busiest retail days of the year, the post-Thanksgiving holiday sales day, Black Friday. The issue could scare users away from Amazon, which could be problematic for revenue if the issue impacted a wide number of users just before the heavy shopping day.

Amazon’s vague and non-specific email also sparked criticism from users — including security experts — who accused the company of withholding information. Some said that the correspondence looked like a phishing email, used to trick customers into turning over account information.

Customers in the U.S., the U.K. and elsewhere in Europe have reported receiving an email from Amazon.

Amazon, as a Washington-based company, is required to inform the state attorney general of data incidents involving 500 state residents or more. Yet, in Europe, where data protection rules are stronger — even in the wake of the recently introduced General Data Protection Regulation (GDPR) — it’s less clear if Amazon needs to disclose the incident.

The UK’s data protection regulator, the Information Commissioner’s Office, told TechCrunch: “Under the GDPR, organizations must assess if a breach should be reported to the ICO, or to the equivalent supervisory body if they are not based in the UK.”

“It is always the company’s responsibility to identify when UK citizens have been affected as part of a data breach and take steps to reduce any harm to consumers,” a spokesperson said. “The ICO will however continue to monitor the situation and cooperate with other supervisory authorities where required.”

To continue earning our trust, technology companies need to be forthcoming and transparent when security problems arise. Not only does that provide victims with the maximum amount of information they can use to recover and avoid future problems, but it also gives users confidence that their data is being responsibly managed no matter what happens.

People fear what they don’t understand, and for now, Amazon is failing to help the public understand what happened.

TechCrunch’s Natasha Lomas contributed to this report.

21 Nov 2018

CV Compiler is a robot that fixes your resume to make you more competitive

Machine learning is everywhere now, including recruiting. Take CV Compiler, a new product by Andrew Stetsenko and Alexandra Dosii. This web app uses machine learning to analyze and repair your technical resume, helping you shine in front of recruiters at Google, Yahoo and Facebook.

The founders are marketing and HR experts who have a combined 15 years of experience in making recruiting smarter. Stetsenko founded Relocate.me and GlossaryTech while Dosii worked at a number of marketing firms before settling on CV Compiler.

The app essentially checks your resume and tells you what to fix and where to submit it. It’s been completely bootstrapped thus far, and the founders are working on new and improved machine learning algorithms while maintaining a library of common CV fixes.

“There are lots of online resume analysis tools, but these services are too generic, meaning they can be used by multiple professionals and the results are poor and very general. After the feedback is received, users are often forced to buy some extra services,” said Stetsenko. “In contrast, the CV Compiler is designed exclusively for tech professionals. The online review technology scans for keywords from the world of programming and how they are used in the resume, relative to the best practices in the industry.”

The product was born out of Stetsenko’s work at GlossaryTech, a Chrome extension that helps users understand tech terms. He used a great deal of natural language processing and keyword taxonomy in that product and, in turn, moved some of that to his CV service.
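At its simplest, that kind of keyword-and-taxonomy scanning can be illustrated in a few lines of code. The sketch below is purely illustrative — it is not CV Compiler’s actual algorithm, and the tiny taxonomy is made up — but it shows the basic idea of matching a resume against categorized keywords and flagging the gaps:

```python
import re

# Hypothetical, trimmed-down keyword taxonomy (illustrative only)
TAXONOMY = {
    "languages": {"python", "java", "go", "typescript"},
    "infrastructure": {"docker", "kubernetes", "aws", "terraform"},
    "practices": {"ci/cd", "unit testing", "code review", "agile"},
}

def scan_resume(text: str) -> dict:
    """Count taxonomy keywords in the resume and list what is missing per category."""
    text = text.lower()
    report = {}
    for category, keywords in TAXONOMY.items():
        counts = {kw: len(re.findall(r"\b" + re.escape(kw) + r"\b", text)) for kw in keywords}
        found = {kw: n for kw, n in counts.items() if n > 0}
        report[category] = {"found": found, "missing": sorted(keywords - found.keys())}
    return report

resume = "Built CI/CD pipelines in Python; deployed services with Docker on AWS."
for category, result in scan_resume(resume).items():
    print(category, result)
```

A production system would go well beyond literal matching — weighting terms by how recruiters actually search, handling synonyms and context — but a gap report like this is the kind of output a resume checker can turn into concrete fix suggestions.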

“We found that many job applications were being rejected without even an interview, because of the resumes. Apparently, 10 seconds is long enough for a recruiter to eliminate many candidates,” he said.

The service is live now and the team expects the corpus of information to grow and improve over time. Until then, why not let a machine learning robot tell you what you’re doing wrong in trying to get a job? That is, before it replaces you completely.