04 Apr 2018

Amazon introduces new private certificate feature

At the Amazon Summit in San Francisco today, the company announced a new cloud service that enables organizations to create and manage private certificates in the cloud.

While the Summit wasn’t chock full of announcements like the annual re:Invent conference, it did offer some new services, like beefing up the AWS Certificate Manager (ACM) with an all-new Private Certificate Authority (PCA). (Amazon does love its acronyms.)

Private certificates let you limit exactly who has access, giving you more control and therefore greater security. They are usually confined to a defined group such as a company or organization, but until now they have been rather complex to create.

As with any good cloud service, the Private Certificate Authority removes a layer of complexity involved in managing certificates. “ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates with pay as you go pricing. This enables developers to provision certificates in just a few simple API calls while administrators have a central CA management console and fine grained access control through granular IAM policies,” Amazon’s Randall Hunt wrote in a blog post announcing the new service.

The new feature lets you provision and configure certificates, then import or export them after they’ve been created. The certificates are stored on “AWS managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3),” Hunt wrote. What’s more, admins can access reports to track certificate creation on the system.
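
Hunt’s post has the authoritative details; as a rough sketch of those “few simple API calls,” here is what creating a private CA might look like using boto3, the AWS SDK for Python. The organization, CA name and the S3 bucket for the revocation list are hypothetical placeholders, not values from Amazon’s announcement.

    import boto3

    # ACM Private CA has its own service client, separate from plain ACM
    client = boto3.client('acm-pca')

    response = client.create_certificate_authority(
        CertificateAuthorityConfiguration={
            'KeyAlgorithm': 'RSA_2048',
            'SigningAlgorithm': 'SHA256WITHRSA',
            'Subject': {
                'Country': 'US',
                'Organization': 'Example Corp',        # hypothetical org
                'CommonName': 'corp.example.internal'  # hypothetical CA name
            }
        },
        RevocationConfiguration={
            'CrlConfiguration': {
                'Enabled': True,
                'ExpirationInDays': 7,
                'S3BucketName': 'example-crl-bucket'   # hypothetical bucket
            }
        },
        CertificateAuthorityType='SUBORDINATE'
    )

    # The returned ARN identifies the CA in later issue/revoke/export calls
    print(response['CertificateAuthorityArn'])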

The new service is available today and costs $400 per month per certificate authority you set up. For complete pricing details, see the blog post.

04 Apr 2018

White men still make the most money in tech, Hired says

Hired has released its annual pay equity report. Unsurprisingly, white men earn the most at $136,000 per year on average, followed by Asian men making a yearly average of $135,000.

Here are some other stats that highlight the pay discrepancies in tech:

  • Companies offer women 4 percent less than men, on average, for the same role at the same company
  • Black and Latinx women are offered 90 cents for every dollar white men earn
  • LGBTQ+ women are offered more than their non-LGBTQ+ counterparts

Other fun (sad) facts:

  • San Francisco has the smallest gender wage gap
  • The gender wage gap gets worse as people get older
  • More than half of women have discovered at some point in their careers that they were paid less than men in similar roles

Unfortunately, the wage gap hasn’t changed since Hired’s 2017 report. The silver lining, however, is that wage equality for tech workers in the U.S. is better than in Toronto, London and Paris. Anyway, you can check out the rest of the sad state of our society here.

04 Apr 2018

White House email domains are at risk of being used in phishing attacks

In the latest episode of how bad some branches of government are at cybersecurity, a new study by the cybersecurity outfit Global Cyber Alliance indicates that 95 percent of the email domains managed by the Executive Office of the President could be spoofed and potentially used in phishing attacks.

Of the domains managed by the Office of the President, only max.gov has fully implemented the highest level of defense against spoofing and phishing emails.

Malicious actors often tweak metadata to trick targets into thinking they are receiving email from an official-sounding domain, like whitehouse.gov.

The Domain-based Message Authentication, Reporting & Conformance (DMARC) protocol protects against these attacks: it lets a domain’s owner publish a policy telling receiving mail servers how to treat messages that fail authentication checks, so spoofed email can be quarantined or rejected outright, and it sends the owner reports about spoofing attempts. Last October, the Department of Homeland Security required that all federal agencies update their email policies to comply with the protocol.
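
In practice, a DMARC policy is just a DNS TXT record published at _dmarc.<domain>, whose p= tag tells receiving servers what to do with mail that fails authentication: none (monitor and report only, the “lowest level” of deployment), quarantine, or reject. As a rough sketch, assuming the third-party dnspython package, here is how anyone can check the policy a domain publishes:

    import dns.resolver  # pip install dnspython

    def dmarc_policy(domain):
        """Return a domain's published DMARC policy tag, or None if absent."""
        try:
            answers = dns.resolver.resolve(f'_dmarc.{domain}', 'TXT')
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None  # no DMARC record published at all
        for rdata in answers:
            record = b''.join(rdata.strings).decode()
            if record.lower().startswith('v=dmarc1'):
                tags = dict(tag.strip().split('=', 1)
                            for tag in record.split(';') if '=' in tag)
                return tags.get('p')  # 'none', 'quarantine' or 'reject'
        return None

    print(dmarc_policy('whitehouse.gov'))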

So far, only seven White House email domains have taken the basic step of setting up alerts to be notified when their addresses are used in phishing scams, according to the Global Cyber Alliance report. Another 18 haven’t started deploying DMARC at all.

Without DMARC in place, those domains can be spoofed by would-be scammers, and recipients would have no idea they were receiving a fake email from a government account.

“Email domains managed by the EOP are crown jewels that criminals and foreign adversaries covet,” said Philip Reitinger, president and CEO of the Global Cyber Alliance, in a statement. “The lack of full DMARC deployment across nearly every EOP email address poses a national security risk that must be fixed. The good news is that four new domains have implemented DMARC at the lowest level, which I hope indicates that DMARC deployment is moving forward.”

Among the domains that have rolled out DMARC at its lowest setting are WhiteHouse.gov and EOP.gov. Other domains under the purview of the EOP include Budget.gov, OMB.gov, USTR.gov and OSTP.gov.

These security issues aren’t just academic concerns cooked up by bureaucrats to waste time and add red tape to operations. To see how significant these issues can be, look no further than the Atlanta cyberattack, which happened two weeks ago.

The city is still restoring systems after the attack — a ransomware assault that used SamSam malware to encrypt exposed files. Hackers offered to decrypt the files for a (relatively small) ransom (which it looks like the city has not paid, since systems are still offline).

At least Atlanta was able to restore some of its systems (way to back up those files, government officials), but the attack reveals how many critical systems are still vulnerable.

Now consider if hackers were using federal government addresses to distribute malware — that’s a problem everyone should be concerned about.

04 Apr 2018

Facebook restricts APIs, axes old Instagram platform amidst scandals

Facebook is entering a tough transition period where it won’t take chances around data privacy in the wake of the Cambridge Analytica fiasco, CTO Mike Schroepfer tells TechCrunch. That’s why it’s significantly limiting the data available from its Events, Groups, and Pages APIs plus Facebook Login, or requiring approval for access. It’s also shutting down search by email or username and changing its account recovery system after discovering malicious actors were using these features to scrape people’s data.

Meanwhile, Instagram will immediately shut down its old platform API that was scheduled for deprecation on July 31st. TechCrunch first reported that developers’ Instagram apps were breaking over the weekend due to a sudden reduction in the API call limit. Instagram refused to comment, leading to developer frustration as their apps that analyze people’s followers and help them grow their audiences stopped working.

Now an Instagram spokesperson tells TechCrunch that “Effective today, Instagram is speeding up the previously announced deprecation of the Instagram API Platform” as part of Facebook’s work to protect people’s data. The APIs for follower lists, relationships, and commenting on public content will cease to function immediately. The December 11th, 2018 deprecation of public content reading APIs and the 2020 deprecation of basic profile info APIs will happen as scheduled, but Instagram has implemented rate limit reductions on them now.

The announcements come alongside Facebook’s admission that up to 87 million users had their data improperly obtained by Cambridge Analytica, up from early estimates of 50 million. These users will see a warning atop their News Feed about what happened and what they should do, along with options for removing other apps they’ve given permissions to. Facebook CEO Mark Zuckerberg plans to take questions about today’s announcements during a 1:30pm Pacific conference call.

Regarding the Facebook APIs, here’s the abbreviated version of the changes:

  • Events API will require approval for use in the future, and developers will no longer be able to pull guest lists or post to the event wall (see the sketch after this list)
  • Groups API will require approval from Facebook and a Group admin, and developers won’t be able to pull member lists or the names and photos associated with posts
  • Pages API will only be available to developers providing “useful services,” and all future access will require Facebook approval
  • Facebook Login use will require a stricter review process, apps won’t be able to pull users’ personal information or activity, and they’ll lose access after three months of non-use
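
To make the Events change concrete: guest lists were previously readable through the Graph API’s attending edge on an event. As a rough sketch, with a hypothetical event ID and a placeholder token, this is the kind of request that will now fail without Facebook’s approval:

    import requests

    EVENT_ID = '1234567890'   # hypothetical event ID
    ACCESS_TOKEN = 'EAAB...'  # placeholder token, not a real credential

    # Pulling an event's guest list: under the new rules this call
    # requires Facebook's approval before it will return data.
    resp = requests.get(
        f'https://graph.facebook.com/v2.12/{EVENT_ID}/attending',
        params={'access_token': ACCESS_TOKEN}
    )
    print(resp.status_code, resp.json())
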
04 Apr 2018

Facebook admits Cambridge Analytica hijacked data on up to 87M users

Facebook will warn 87 million users, mostly in the U.S., that their data “may have been improperly shared with Cambridge Analytica by apps that they or their friends used”, the company just announced. Facebook CTO Mike Schroepfer tells TechCrunch that Facebook will warn these users with a notice atop the News Feed containing information about what data of theirs might have been obtained, and what they should do now. It will also show its new bulk app permissions removal tool atop the feed.

Schroepfer says that 87 million is the maximum number of users impacted, up from initial reports of 50 million, as Facebook isn’t positive how many people had their data misused. It likely doesn’t want to low-ball and have to revise the number upward later, as it did when it initially reported that the Russian election interference ads were seen by 10 million users, then had to admit to Congress that the figure was actually 126 million once organic posts were included. Mark Zuckerberg plans to take questions from reporters about the changes during a 1:30pm Pacific conference call on the subject.

The changes come as part of a slew of announcements in the wake of the Cambridge Analytica scandal, including new restrictions on Facebook API use and the immediate shutdown of the old Instagram API, which was slated for deprecation in July but started breaking developers’ apps this week. Facebook is now undergoing a deep audit of app developers that pulled a lot of data or that look suspicious, and Schroepfer promises Facebook will make further disclosures if it finds any situations similar to the Cambridge Analytica fiasco.

Facebook is trying to fix its broken data privacy after a developer named Dr. Aleksandr Kogan used the platform to administer a personality test app that collected data about participants and their friends. That data was then passed to Cambridge Analytica, where it may have been leveraged to optimize political campaigns, including that of 2016 presidential candidate Donald Trump and the Brexit vote, allegations Cambridge Analytica vehemently denies. Regardless of how the data was employed to political ends, that lax data sharing was enough to ignite a firestorm around Facebook’s privacy practices.

Following the Cambridge Analytica revelations, the company’s stock dropped precipitously, wiping more than $60 billion off the market capitalization it had built up during its prior period of stable growth. At the time of writing, Facebook was trading at $153.56.

Facebook’s core leadership was slow to respond to the explosion of negative attention, though Zuckerberg and Sandberg broke that silence with a flurry of media appearances, interviews and print ads. The company also came under the scrutiny of Congress once more and that pressure, which came from subcommittees in both the House and Senate and from both political parties, appears to have paid off. Zuckerberg is expected to testify before the House Energy and Commerce Committee, just one of the several powerful committees calling for him, on April 11.

While it’s certainly unfortunate that it took the mishandling of user data on a massive scale to force the issue, the incident has become the straw that broke the Facebook camel’s back when it comes to privacy — and that appears to be catalyzing change. Schroepfer tells TechCrunch Facebook is now lifting every rock to find any other vulnerabilities that could be used to illicitly access or steal people’s information. Now we’re getting changes that should have been in place years ago, changes that could make Facebook a safer place to network for users concerned about how the company handles their private data.

04 Apr 2018

AWS adds automated point-in-time recovery to DynamoDB

One of the joys of cloud computing is handing over your data to the cloud vendor and letting them handle the heavy lifting. Up until now that has meant they updated the software or scaled the hardware for you. Today, AWS took that to another level when it announced Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR).

With this new service, the company lets you simply enable the new backup tool, and the backup happens automatically. Amazon takes care of the rest, providing a continuous backup of all the data in your DynamoDB tables.

But it doesn’t stop there: the backup system acts as a recording of sorts, letting you rewind your data set to any point in time with “per second granularity,” up to 35 days in the past. What’s more, you can access the tool from the AWS Management Console, an API call or the AWS Command Line Interface (CLI).
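
As a rough sketch of what enabling and using PITR looks like through the API, assuming boto3, the AWS SDK for Python, and a hypothetical table name:

    import boto3
    from datetime import datetime, timedelta

    dynamodb = boto3.client('dynamodb')

    # Turn on continuous backups with point-in-time recovery for a table
    dynamodb.update_continuous_backups(
        TableName='example-table',  # hypothetical table name
        PointInTimeRecoverySpecification={'PointInTimeRecoveryEnabled': True}
    )

    # Restore the table as it existed one hour ago; restores always
    # create a new table rather than overwriting the source
    dynamodb.restore_table_to_point_in_time(
        SourceTableName='example-table',
        TargetTableName='example-table-restored',
        RestoreDateTime=datetime.utcnow() - timedelta(hours=1)
    )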

“We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict,” Amazon’s Randall Hunt wrote in the blog post announcing the new feature.

If you’re concerned about the 35-day limit, you needn’t be, as the system is an adjunct to your regular on-demand backups, which you can keep for as long as you need.

Amazon’s Chief Technology Officer, Werner Vogels, who introduced the new service at the Amazon Summit in San Francisco today, said it doesn’t matter how much data you have. Even with a terabyte of data, you can make use of this service. “This is a truly powerful mechanism here,” Vogels said.

The new service is available in various regions today. You can learn about regional availability and pricing options here.

04 Apr 2018

Police say shooter’s anger over YouTube policies ‘appears to be the motive’

The woman who attacked YouTube’s headquarters in San Bruno yesterday may have been acting out of anger over the site’s content policies. “At this point in the investigation it is believed that the suspect was upset with the policies and practices of YouTube. This appears to be the motive for this incident,” said San Bruno Police Chief Ed Barberini at a press conference Wednesday morning.

The police identified Nasim Najafi Aghdam, a San Diego resident, as the shooter; police said they believed she acted alone. Her family reported her missing on March 31.

Aghdam was a frequent and outspoken user of YouTube and had posted dozens of videos on various topics and in several languages. Her personal website and videos make it clear she believed the company deliberately limited the reach of some users, including herself.

She also had videos on other sites, including one called “Discrimination Filteration On Youtube” (sic) on vid.me, and the site includes images of her YouTube dashboard showing declines in views over the course of 2016.

Specifically she appears to have been concerned that she was being discriminated against for her veganism, a practice about which she felt very strongly, not on the basis of (for example) her race, gender or other traits. She asserted that one video had been age-restricted by an anti-vegan YouTube employee, and that subsequently the company “began filtering my videos to reduce views & revenue to suppress me!”

How this frustration with YouTube’s policies grew into the choice to commit yesterday’s attack is not clear and police did not speculate; the investigation is just starting.

The police did say, however, that they had had some kind of contact with Aghdam earlier that day, though they did not mention what that was.

She reportedly accessed the YouTube building through the parking garage, from which she entered the courtyard and began firing indiscriminately. She struck three people before turning the firearm, a licensed and registered 9mm handgun, on herself.

“At this point we have no indication that she was selecting individuals to fire on,” Barberini said. “There’s absolutely no link or relationship that we’ve identified between our suspect and anybody who was at the scene at the time of the incident. We know that she was upset with YouTube; whether that rises to the level of terrorism will hopefully be determined in the coming weeks.”

He also praised YouTube and its employees for their quick response:

“YouTube’s response to this critical incident was also extraordinary. They were extremely prepared to offer services to all of their employees in a swift and timely manner. This would have been much more challenging for us without all of their assistance.”

The company had worked with police specifically on policies and measures for dealing with an active shooter. YouTube CEO Susan Wojcicki addressed the shooting in a tweet yesterday.

04 Apr 2018

Burst breaks you out of your filter bubble on Reddit

One of the many accusations amid the trouble with Facebook – its data privacy lapses and its enabling of Russian trolls – is that it keeps people locked into echo chambers where access to other viewpoints is limited. People are beginning to realize this is a common problem across social media. A new app called Burst is experimenting with how to break people out of their bubbles. But its focus is not on Facebook, as it turns out – it’s on Reddit.

In a way, Reddit has a filter bubble problem too, given that users choose which subreddits (communities) to subscribe to. That means that in addition to subscribing to something like r/politics, you may also subscribe to r/liberal or r/conservative, where you’ll hear your own viewpoints echoed and reinforced.

What Burst does is combine everyone’s comments and insights into one place. It works by finding everywhere a post has been re-shared across Reddit, then merges all the comments together.
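
Burst’s code isn’t public, but Reddit itself exposes the raw ingredients: every submission has a public duplicates listing of the other subreddits it was posted to. As a rough sketch of the merging idea, assuming the requests package (the function and names here are illustrative, not Burst’s actual implementation):

    import requests

    HEADERS = {'User-Agent': 'burst-style-demo/0.1'}  # Reddit requires a UA string

    def merged_comments(post_id):
        """Gather top-level comments from every subreddit a post was shared to."""
        # Reddit's public /duplicates endpoint lists cross-posts of a submission
        url = f'https://www.reddit.com/duplicates/{post_id}.json'
        listings = requests.get(url, headers=HEADERS).json()
        # The first listing holds the original post, the second its duplicates
        posts = listings[0]['data']['children'] + listings[1]['data']['children']
        comments = []
        for post in posts:
            permalink = post['data']['permalink']
            thread = requests.get(f'https://www.reddit.com{permalink}.json',
                                  headers=HEADERS).json()
            for child in thread[1]['data']['children']:
                if child['kind'] == 't1':  # 't1' is Reddit's type tag for comments
                    comments.append((post['data']['subreddit'],
                                     child['data']['body']))
        return comments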

The app is organized like Reddit itself with sections for Hot, Top, Rising, New and Controversial as well as access to your own subreddits when you sign in. To view the comments on a post, you swipe to the left. And to upvote or downvote, you use Force Touch.

App creator Alex Duckmanton, currently a product designer in Sydney, explains that Burst has exposed him to different ideas.

“In my own use of Burst while testing, I’ve found that even though I see a lot more stuff I don’t agree with, seeing it helps me empathise with other perspectives. It’s my hope that this will help tone down some of the more toxic aspects of political discussion online,” he says.

While political discussions are often a hotbed for trolling and the general social media-induced lack of empathy, an app like Burst can help in other areas, too. For example, notes Duckmanton, a post about the new iPhone might appear in both r/iphone and r/android, in addition to r/apple or r/technology. This allows you to see a wider array of discussions about the iPhone news than if you only read the comments from one community.

Duckmanton says he began to build Burst when he had some time off from work in the second half of last year, because he’d become concerned about the impact of filter bubbles on social media platforms.

“Filter bubbles aren’t being addressed in a meaningful way by any of the major social platforms,” he says. “The typical response of banning problem users or removing content is a brute-force approach and isn’t long-term or scalable. Outside of that, users are expected to self-manage the content they consume in order to have a more balanced view. It’s not reasonable to expect that of people, and it certainly won’t happen for people who aren’t already engaged in news and politics,” Duckmanton continues.

“We need to create products in such a way that filter bubbles cannot exist because their design fundamentally prevents them,” he adds.

The project is entirely self-funded and is currently supported by ads with the option to remove them via an in-app purchase.

Now that the app is out in the wild, Duckmanton says he’ll see how the community reacts before deciding whether to work on Burst full-time. If it were to take off, however, he imagines a future where a Burst-like app could be built for other platforms – like Twitter. But in the immediate term, nothing like that is in the works.

Burst is a free download for iOS. A link to sign up for an Android beta is here.