Year: 2019

13 Nov 2019

The Math of Sisyphus

“There is but one truly serious question in philosophy, and that is suicide,” wrote Albert Camus in The Myth of Sisyphus. This is equally true for a human navigating an absurd existence and an artificial intelligence navigating a morally insoluble situation.

As AI-powered vehicles take the road, questions about their behavior are inevitable — and the escalation to matters of life or death equally so. This curiosity often takes the form of asking whom the car should steer toward when it has no choice but to hit one of a variety of innocent bystanders. Men? Women? Old people? Young people? Criminals? People with bad credit?

There are a number of reasons this question is a silly one, yet at the same time a deeply important one. But as far as I’m concerned, there is only one real solution that makes sense: when presented with the possibility of taking a life, the car must always first attempt to take its own.

The trolley non-problem

First, let’s get a few things straight about the question we’re attempting to answer.

There is unequivocally an air of contrivance to the situations under discussion. That’s because they’re not plausible real-world situations but mutations of a venerable thought experiment often called the “Trolley Problem.” The most familiar version dates to the ’60s, but versions of it can be found going back to discussions of utilitarianism, and before that in classical philosophy.

The problem goes: A train car is out of control, and it’s going to hit a family of five who are trapped on the tracks. Fortunately, you happen to be standing next to a lever that will divert the car to another track… where there’s only one person. Do you pull the switch? Okay, but what if there are ten people on the first track? What if the person on the second one is your sister? What if they’re terminally ill? If you choose not to act, is that in itself an act, leaving you responsible for those deaths? The possibilities multiply when it’s a car on a street: for example, what if one of the people is crossing against the light — does that make it all their fault? But what if they’re blind?

And so on. It’s a revealing and flexible exercise that makes people (frequently undergrads taking Intro to Philosophy) examine the many questions involved in how we value the lives of others, how we view our own responsibility, and so on.

But it isn’t a good way to create an actionable rule for real-life use.

After all, you don’t see convoluted moral logic on signs at railroad switches instructing operators on an elaborate hierarchy of the values of various lives. This is because the actions and outcomes are a red herring; the point of the exercise is to illustrate the fluidity of our ethical system. There’s no trick to the setup, no secret “correct” answer to calculate. The goal is not even to find an answer, but to generate discussion and insight. So while it’s an interesting question, it’s fundamentally a question for humans, and consequently not really one our cars can or should be expected to answer, even with strict rules from their human engineers.

And it must also be acknowledged that these situations are going to be vanishingly rare. Most of the canonical versions of this thought experiment — five people versus one, or a kid and an old person — are so astronomically unlikely to occur that even if we did find a best method that a car should always choose, it’ll only be relevant once every trillion miles driven or so. And who’s to say whether that solution will be the right one in another country, among people with different values, or in 10 or 20 years?

No matter how many senses and compute units a car has, it can no more calculate its way out of an ethical conundrum than Sisyphus could have calculated a better path by which to push his boulder up the mountain. The idea is, so to speak, absurd.

We can’t have our cars attempting to solve a moral question that we ourselves can’t. Yet somehow that doesn’t stop us from thinking about it, from wanting an answer. We want to somehow be prepared for the situation even though it may never arise. What’s to be done?

Implicit and explicit trust

The entire self-driving car ecosystem has to be built on trust. That trust will grow over time, but there are two aspects to be considered.

The first is implicit trust. This is the kind of trust we have in the cars we drive today: that despite being one-ton metal missiles propelled by a series of explosions and filled with high-octane fuel, they won’t blow up, fail to stop when we hit the brakes, spin out when we turn the wheel, and so on. That trust is the result of years and years of success on the part of car manufacturers. Considering their complexity, cars are among the most reliable machines ever made. That’s been proven in practice, and most of the time we don’t even think of the possibility of the brakes not catching when the pedal is depressed.

You trust your personal missile to work the way you trust a fridge to stay cold. Let’s take a moment to appreciate how amazing that is.

Self-driving cars, however, introduce new factors, unproven ones. Their proponents are correct when they say that autonomous vehicles will revolutionize the road, reduce traffic deaths, shorten commutes, and so on. Computers are going to be much better drivers than us in countless ways. They have superior reflexes, can see in all directions simultaneously (not to mention in the dark, and around or through obstacles), communicate and collaborate instantly with nearby vehicles, immediately sense and potentially fix technical problems… the list goes on.

But until these amazing abilities lose their luster and become just more pieces of the transportation tech infrastructure that we trust, they’ll be suspect. That part we can’t really accelerate except, paradoxically, by taking it slow and making sure no highly visible outlier events (like that fatal Uber crash) arrest the zeitgeist and set back that trust by years. Make haste slowly, as they say. Few people remember anti-lock brakes saving their lives, though it’s probably happened to several people reading this right now — it just quietly reinforced our implicit trust in the vehicle. And no one will remember when their car improved their commute by five minutes with a hundred tiny improvements. But they sure do remember that Toyotas killed dozens with bad software that locked the car’s accelerator.

The second part of that trust is explicit: something that has to be communicated, learned, something of which we are consciously aware.

For cars there aren’t many of these. The rules of the road differ widely and are flexible — some places more than others — and on ordinary highways and city streets we operate our vehicles almost instinctively. When we are in the role of pedestrian, we behave as a self-aware part of the ecosystem — we walk, we cross, we step in front of moving cars because we assume the driver will see us, avoid us, stop before they hit us. This is because we assume that behind the wheel of every car is an attentive human who will behave according to the rules we have all internalized.

Nevertheless, we have signals, even if we don’t realize we’re sending or receiving them; how else can you explain how you know that truck up there is going to change lanes five seconds before it turns its blinker on? How else can you be so sure a car isn’t going to stop, and hold a friend back from stepping into the crosswalk? Just because we don’t quite understand it doesn’t mean we don’t exert it or assess it all the time. Making eye contact, standing in a place implying the need to cross, waving, making space for a merge, short honks and long honks… it’s a learned skill, and a culture-specific, or even city-specific, one at that.

Cold blooded

With self-driving cars there is no humanity in which to place our trust. We trust other people because they’re like us; computers are not like us.

In time, autonomous vehicles of all kinds will become as much a part of the accepted ecosystem as automated lights and bridges, metered freeway entrances, parking monitoring systems, and so on. Until that time we will have to learn the rules by which autonomous vehicles operate, both through observation and straightforward instruction.

Some of these habits will be easily understood; for instance, maybe autonomous vehicles will never, ever try to make a U-turn by crossing a double yellow line. I try not to myself, but you know how it is. I’d rather do that than go an extra three blocks to do it legally. But an AV will perhaps scrupulously adhere to traffic laws like that. So there’s one possible rule.

Others might not be quite so hard and fast. Merging and lane changes can be messy, but perhaps it will be the established pattern that AVs will always brake and join the line further back rather than try to move up a spot. This requires a little more context and the behavior is more adaptive, but it’s still a relatively simple pattern that you can perceive and react to, or even exploit to get ahead a bit (please don’t).

It’s important to note that, like the trolley problem “solutions,” there’s no huge list of car behaviors that says, always drop back when merging, always give the right of way, never this, this if that, etc. Just as our decision to switch or not switch tracks proceeds from a higher-order process of morality in our minds, these autonomous behaviors will be the natural result of a large set of complicated evaluations and decision-making processes that weigh hundreds of factors like positions of nearby cars, speed, lane width, etc. But I think they’ll be reliable enough in some ways and in some behaviors that there will definitely be a self-driving “style” that doesn’t deviate too much.
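To make that concrete, here is a toy sketch of the kind of weighted evaluation described above. Every factor, weight and name here is invented for illustration — a real system weighs hundreds of signals — but a fixed set of weights is exactly what would produce a recognizable, consistent driving “style.”

```python
# Hypothetical sketch: an AV scores candidate maneuvers by weighing
# situational factors. The factors and weights are invented for
# illustration; real planners evaluate hundreds of signals.

def score_maneuver(gap_m, closing_speed_ms, lane_width_m):
    """Higher score = more preferred. Penalize tight gaps and fast closers."""
    score = 0.0
    score -= max(0.0, 5.0 - gap_m) * 2.0        # penalty for a tight gap
    score -= max(0.0, closing_speed_ms - 2.0)   # penalty for fast-approaching traffic
    score += (lane_width_m - 3.0) * 0.5         # mild preference for roomier lanes
    return score

def choose_merge(candidates):
    """Pick the best-scoring option. Because the same weights apply every
    time, the same cautious choice emerges in similar situations."""
    return max(candidates, key=lambda c: score_maneuver(*c["factors"]))

options = [
    {"name": "move_up",   "factors": (2.0, 4.0, 3.2)},  # squeeze into a small gap
    {"name": "drop_back", "factors": (8.0, 1.0, 3.2)},  # join the line further back
]
print(choose_merge(options)["name"])  # the cautious option wins with these weights
```

With these (made-up) weights, dropping back always outscores squeezing in — which is the observable, learnable pattern the essay describes.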

Although few if any of these behaviors are likely to be dangerous in and of themselves, it will be helpful to understand them if you are going to be sharing the road with them. Imperfect knowledge is how we get accidents to begin with. Establishing an explicit trust relationship with self-driving vehicles is part of the process of accepting them into our everyday lives.

But people naturally want to take things to their logical ends, even if those ends aren’t really logical. And as you consider the many ways AVs will drive and how they will navigate certain situations, the “but what if…” scenarios naturally get more and more dire and specific as variables approach limits, and ultimately you arrive at the AV equivalent of the trolley problem that we started with. What happens when the car has to make a choice between people?

It’s not that anyone even thinks it will happen to them. What they want to know, as a prerequisite to trust, is that the system is not unprepared, and that the prepared response is not one that puts them in danger. People don’t want to be the victim of the self-driving car’s logic, even theoretically — that would be an impassable barrier to trust.

Because whatever the scenario, whoever it “chooses” between, one of those parties is undeniably the victim. The car got on the road and, following its ill logic to the bitter end, homed in on and struck this person rather than that one.

If neither of the people in this AV-trolley problem can by any reasonable measure be determined to be the “correct” one to choose, especially from their perspective (which must after all be considered), what else is there to do? Well, we have to remember that there’s one other “person” involved here: the car itself.

Is it self-destruction if you don’t have a self?

My suggestion is simply that it be made a universal policy that should a self-driving car be put in a situation where it is at serious risk of striking a person, it must take whatever means it can to avoid it — up to and including destroying itself, with no consideration for its own “life.” Essentially, when presented with the possibility of murder, an autonomous vehicle must always prefer suicide.

It doesn’t have to detonate itself or anything. It just needs to take itself out of the action, and a robust improvisational engine can be produced to that end just as well as for avoiding swerving trucks, changing lanes suddenly and any other behavior. There are telephone poles, parked cars, trees — take your pick; any of these things will do as long as they stop the car.
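As a sketch of this policy (the names and cost values are hypothetical, not any real planner’s API), the rule amounts to making harm to a person an infinitely worse outcome than any amount of damage to the vehicle itself:

```python
# Hypothetical sketch of the "prefer self-sacrifice" rule: when no
# collision-free path exists, any option that strikes a person ranks
# strictly worse than any option that only damages the vehicle.

PERSON_COST = float("inf")  # striking a person is never an acceptable choice

def option_cost(option):
    if option["hits_person"]:
        return PERSON_COST
    return option["vehicle_damage"]  # 0.0 (unscathed) .. 1.0 (total loss)

def last_resort(options):
    """Choose the option that avoids people, even at total self-destruction."""
    return min(options, key=option_cost)

options = [
    {"name": "swerve_into_pedestrian", "hits_person": True,  "vehicle_damage": 0.1},
    {"name": "hit_telephone_pole",     "hits_person": False, "vehicle_damage": 1.0},
]
print(last_resort(options)["name"])  # the car chooses the pole
```

The point of the infinite cost is that no finite amount of self-preservation can ever tip the scale back toward the pedestrian — which is precisely the guarantee the policy is meant to make legible.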

The objection, of course, is that there is likely to be a person inside the self-driving car. Yes — but this person has consented to the inherent risk involved, while the people on the street haven’t. While much of the moral calculus of the trolley problem is academic, this bit actually makes a difference.

Consenting to the risks of using a self-driving system means the occupant is acknowledging the possibility that should such a situation arise, however remote the possibility, they would be the person who may be the victim of it. They are the ones who will explicitly consent to trust their lives to the logic of the self-driving system. Furthermore, as a practical consideration, the occupant is, so to speak, on the soft side of the car.

As we’ve already established, it’s unlikely a car will ever have to do this. But what it does is provide a substantial and easily understood answer when someone asks the perfectly natural question of what an autonomous vehicle will do when it is careening toward a pedestrian. Simple: it will do its level best to destroy itself first.

There are extremely specific and dire situations that there will never be a solution to as long as there are moving cars and moving people, and self-driving vehicles are no exception to that. You’ll never run out of imaginary scenarios for any system, human or automated, to fail. But it is in order to reduce the number of such scenarios and help establish trust, not to render tragedy impossible, that every self-driving car should robustly and provably prefer its own destruction to that of a person outside itself.

We are not aiming for a complete solution, just an intuitive one. Self-driving cars will, say, always brake to merge, never cross a double yellow in normal traffic, and so on and so forth — and will crash themselves rather than hit a pedestrian. Regardless of the specifics and limitations of the model, that’s a behavior anyone can understand, including those who must consent to it.

Although even the most hard-bitten existentialist would be unlikely to support a systematic framework for suicide, it makes a difference when “suicide” is more likely to mean a fender bender and damage to one’s pocket rather than the death or injury of another. To destroy oneself is different when there is no self to destroy, and, practically speaking, the risk to passengers, equipped with airbags and seat belts, is far less than the risk to pedestrians.

How exactly would this all be accomplished in practice? Well, it could of course be required by transportation authorities, like seat belts and other safety measures. But unlike seat belts, the proprietary and complex inner workings of an autonomous system aren’t easily verifiable by non-experts. There are ways, but we should be wary of putting ourselves in a position where we have to trust not a technology but the company that administrates it. Either can fail us, but only one can betray us.

Perhaps there will be no need to rely on regulators, though: No brand of car wants to have its vehicles associated with running down a pedestrian. Today there are probably more accidents in Civics and Camrys than anything else, but no one thinks that makes them dangerous to drive — it just means more people drive them, and their drivers make mistakes like anyone else.

On the other hand, if an automaker’s brand of self-driving vehicle hits someone, it’s obvious (and right) that the company will bear the blame. And consumers will see that — for one thing, it will be widely reported, and for another, there will probably be highly robust tracking of this kind of thing, including footage and logs from these accidents.

If automakers want to avoid pedestrian strikes and fatalities, they will incorporate something like this self-destruction protocol in their cars as a last line of defense, even if it leads to a net increase in autonomous collisions. It would be much preferable to be known as having a cautious AI than a killer one. So I think that, like other safety mechanisms, this or something like it will be included and, I hope, publicized on every car not because it’s required, but because it makes sense.

People deserve to know how things like self-driving cars work, even if few people on the planet can truly understand the complex computations and algorithms that govern them. They should, like regular cars, be able to be understood at a surface level. This case of understanding them at an extreme end of their behavior is not one that will be relevant every day, but it is a crucial one because it is something that matters to us at a gut level: knowing that these cars aren’t evaluating us as targets via mysterious and fundamentally inadequate algorithms.

To repurpose Camus: “These are facts the heart can feel; yet they call for careful study before they become clear to the intellect.” Start with a simple solution we feel to be just and work backward from there. And soon — because this is no longer a thought experiment.

13 Nov 2019

Micromobility’s next big opportunities

Launching and operating shared bikes and scooters has lost its novelty. Worldwide, numerous companies are operating shared micromobility services — so many that the industry is well into a consolidation phase.

In Latin America, Grow Mobility formed as part of a merger between micromobility providers Grin and Yellow. In the U.S., Bird acquired Scoot. And, while not a traditional consolidation, Lime and Uber have partnered to include Lime’s scooters within the Uber app.

Meanwhile, we now have a handful of players operating in the direct-to-consumer model: Unagi, Boosted and even Bird have started selling direct to consumers.

Despite the over-saturation of the market, there are still opportunities for new players. Currently, there are two key areas that have yet to see a lot of action and are therefore ripe for disruption.

Those opportunities include creating a software ecosystem on top of bikes and scooters and improving unit economics by focusing on batteries.

As you may remember, business and mobility analyst Horace Dediu recently told me these micromobility vehicles have an opportunity to also be software hubs. In fact, he said it’s where he expects bigger players like Google and Apple to enter the space.

Already, at least one startup is taking steps to become the operating system for micromobility vehicles. Tortoise, a startup founded by former Uber executive Dmitry Shevelenko, is pursuing autonomous repositioning of scooters.

13 Nov 2019

Daily Crunch: Meet Apple’s new MacBook Pro

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. MacBook Pro 16” first impressions: Return of the Mack

Over the past few years, Apple’s MacBook game had begun to suffer from complacency — as problems with the models started to mount (unreliable keyboards, low RAM ceilings and anemic graphics offerings), the once insurmountable advantage that the MacBook had compared to the rest of the notebook industry started to show signs of dwindling.

So the new 16” MacBook Pro is an attempt to rectify most, if not all, of the major complaints of its most loyal, and vocal, users.

2. Google to offer checking accounts in partnership with banks starting next year

Google is calling the project “Cache,” and it’ll partner with banks and credit unions to offer the checking accounts, with the banks handling all financial and compliance activities related to the accounts.

3. A US federal court finds suspicionless searches of phones at the border are illegal

A federal court has ruled that the government is not allowed to search travelers’ phones or other electronic devices at the U.S. border without first having reasonable suspicion of a crime. The case was brought by 11 travelers — 10 of whom are U.S. citizens — with support from the American Civil Liberties Union and the Electronic Frontier Foundation.

4. Convoy raises $400 million to expand its on-demand trucking platform

Convoy co-founders Dan Lewis and Grant Goodale set out in 2015 to modernize freight brokerage, a fragmented and oftentimes analog business that matches loads from shippers with truckers. The company has gone from hundreds of loads per week in 2016 to tens of thousands per week across the U.S.

5. The AI stack that’s changing retail personalization

To be forward-looking, brands and retailers are turning to startups in image recognition and machine learning to know, at a very deep level, what each consumer’s current context and personal preferences are and how they evolve. (Extra Crunch membership required.)

6. These sneakers vibrate

Invented by a man named Brock Seiler, and led by former Beats by Dre CEO Susan Paley, DropLabs aims to take audio to a whole new level by syncing music, movies and other audio to shoes that vibrate the soles of your feet.

7. Elon Musk picks Berlin for Tesla’s Europe Gigafactory

Musk said Tesla is also going to create an engineering and design center in Berlin because “I think Berlin has some of the best art in the world.”

13 Nov 2019

Apple Music introduces Replay, a playlist of your top songs of the year

Apple Music is taking on Spotify with the launch of a new feature, Apple Music Replay, that will allow subscribers to take a look back at their favorite music from 2019. The feature is similar in some ways to Spotify’s popular year-end review, known as Wrapped, but Apple’s version is more than just an annual summary — it’s an ongoing experience.

With Apple Music Replay, subscribers get a playlist of their top songs from 2019, plus playlists for every year they’ve subscribed to Apple Music, retroactively. These can be added to their Apple Music Library, so they can stream them at any time, even when offline. Like any playlist, an Apple Music Replay playlist can also be shared with others, allowing subscribers to compare top songs with friends, for example, or post them to social media.

But while Spotify’s Wrapped is more of an annual retrospective, Apple Music Replay will continue to be updated all year long, evolving as your musical tastes and interests do throughout the year. The playlist and its associated data insights will be updated on Sundays to reflect subscribers’ latest listening activity, says Apple.

That makes the playlist more of a compilation of favorites, which continues to add value throughout the year — not just at the end. And when January rolls around, the 2020 Replay playlist will be a blank slate to fill with your favorites from Apple Music’s catalog of 60 million tracks.

Apple Music Replay is available from the Apple Music app across platforms, including via the web at replay.music.apple.com.

Beyond being fun to use, the addition of Apple Music Replay aims to help Apple better compete against Spotify, which leverages streaming data to create numerous personalized playlists and features for its users and subscribers. Spotify recently reported better-than-expected earnings and said it turned a profit, as it reached 113 million premium subscribers by September’s end. Apple, meanwhile, had 60 million paying subscribers as of late June.

13 Nov 2019

Atlassian expands Jira Service Desk beyond IT teams

Atlassian today announced a set of new templates and workflows for Jira Service Desk that were purpose-built for HR, legal and facilities teams. Service Desk started six years ago as a version of Jira that was mostly meant for IT departments. Atlassian, however, found that other teams inside the companies that adopted it started to use it as well, including various teams at Twitter and Airbnb. With today’s update, it’s now making it easier for these teams, at least in legal, HR and facilities, to get started with Jira Service Desk without having to customize the product themselves.

“Over the last six years, one of the observations that we’ve made was that we need to provide really good services — the idea that we can provide great services to employees is really something that is really on the rise,” said Edwin Wong, the head of the company’s IT products. “I think in the past, maybe we were a bit more forgiving in terms of what employees expected from services departments. But today you’re just so used to great experiences in your consumer life and when you come to work, you expect the same.”

But lots of service teams, he argues, didn’t have the tools to provide this experience, yet they were looking for tools to streamline their workflows (think onboarding for HR teams, for example) and to move from manual processes to something more automated and modern. Jira was already flexible enough to allow them to do this, but the new set of templates now codifies these processes for them.

Wong stressed this isn’t just about tracking but also managing work across teams and providing them a more centralized hub for information. “One of the big challenges that we’ve seen from many of the customers that we’ve spoken to is the challenge of just figuring out where to go when you want something,” he said. “When I have a new employee, where do I go to ask for a new laptop? Is that the same process as telling my facilities teams that perhaps there is an issue with a bathroom?”

Atlassian is starting with these three templates because that’s where it saw the most immediate need. Over time, I’m sure we’ll see the company get into other verticals as well.

13 Nov 2019

What AI startups need to achieve before VCs will invest

Funding of artificial intelligence-focused companies reached approximately $9.3 billion in the U.S. in 2018, an amount that will continue to rise as the transformative impact of AI is realized. That said, not every AI startup has what it takes to secure an investment and scale to success.

So, what do venture capitalists look for when considering an investment in an AI company?

What we look for in all startups

Some fundamentals are important in any of our investments, AI or otherwise. First, entrepreneurs need to articulate that they are solving a large and important problem. It may sound strange, but finding the right problem can be more difficult than finding the right solution. Entrepreneurs need to demonstrate that customers will be willing to switch from what they’re currently using and pay for the new solution.

The team must demonstrate their competence in the domain, their functional skills and, above all, their persistence and commitment. The best ideas likely won’t succeed if the team isn’t able to execute. Setting and achieving realistic milestones is a good way to keep operators and investors aligned. Successful entrepreneurs need to show why their solution offers superior value to competitors in the market — or, in the minority of cases where there is an unresolved need, why they’re in the best position to solve it.

In addition, the team must clearly explain how their technology works, how it differs and is advantageous relative to existing competitors and must explain to investors how that competitive advantage can be sustained.

For AI entrepreneurs, there are additional factors that must be addressed. Why? It is fairly clear that we’re in the early stages of this burgeoning industry, which stands to revolutionize sectors from healthcare to fintech, logistics to transportation and beyond. Standards have not been settled, there is a shortage of personnel, large companies are still struggling with deployment, and much of the talent is concentrated in a few large companies and academic institutions. In addition, there are regulatory challenges that are complex and growing due to the technology’s evolving nature.

Here are five things we like to see AI entrepreneurs demonstrate before making an investment:

Demonstrate mastery over their data and its value: AI needs big data to succeed. There are two models: companies can either help customers add value to their data or build a data business using AI. In either case, startups must demonstrate that the data is reliable, secure and compliant with all regulatory rules. They must also demonstrate that AI is adding value to their own data — it must explain or derive insight from the data, identify important trends, optimize or otherwise deliver value.

With the sheer abundance of data available for companies to collect today, it’s imperative that startups have an agile infrastructure in place that allows them to store, access and analyze this data efficiently. A data-driven startup must become ever more responsive, proactive and consistent over time.

AI entrepreneurs should know that while machine learning can be applied to many problems, it may not always yield accurate predictions in every situation. Models may fail for a variety of reasons, one of which is inadequate, inconsistent or variable data. Successful mastery of the data demonstrates to customers that the data stream is robust, consistent and that the model can adapt if the data sources change.

Entrepreneurs can better address their customer needs if they can demonstrate a fast, efficient way to normalize and label the data using meta tagging and other techniques.

Remember that transparency is a virtue: There is an increased need in certain industries — such as financial services — to explain to regulators how the sausage is made, so to speak. As a result, entrepreneurs must be able to demonstrate explainability, showing how the model arrived at its result (for example, a credit score). This brings us to an additional issue: accounting for bias in models. Here again, the entrepreneur must show the ability to detect and correct biases as soon as they are found.

13 Nov 2019

Rocket Lab’s new ‘Rosie the Robot’ speeds up launch vehicle production — by a lot

Rocket launch startup Rocket Lab is all about building out rapid-response space-launch capabilities, and founder/CEO Peter Beck is showing off its latest advancement in service of that goal: A room-sized manufacturing robot named “Rosie.”

Rosie is tasked with processing the carbon composite components of Rocket Lab’s Electron launch vehicle. That translates to basically getting the rocket flight-ready, and there’s a lot involved in that — it’s a process that normally can take “hundreds of hours,” according to Beck. So how fast can Rosie manage the same task?

“We can produce one launch vehicle in this machine every 12 hours,” Beck says in the video. That includes “every bit of marking, every bit of machining, every bit of drilling,” he adds.

This key new automation tool essentially takes something that was highly bespoke and manual and turns it into something eminently repeatable and expedited, which is a necessary ingredient if Rocket Lab is ever to accomplish its goal of providing high-frequency launches to small satellite customers with very little turnaround time. The company’s New Zealand launch facility recently landed an FAA license that helps sketch out the extent of its ambition, as it’s technically cleared to launch rockets as often as every 72 hours.

In addition to innovations like Rosie, Rocket Lab uses 3D printing for components of its launch vehicle engines, which results in single-day turnaround for production, versus weeks using more traditional methods. It’s also now working on an ambitious plan for rocket recovery, which should help further with providing high-frequency launch capabilities, as it’ll mean the company doesn’t have to build entirely new launch vehicles for every mission.

13 Nov 2019

GitHub launches a mobile app, smarter notifications and improved code search

At its annual Universe conference today, Microsoft-owned GitHub announced a couple of new products, as well as the general availability of a number of tools that developers have been able to test for the last few months. The two announcements that developers will likely be most interested in are the launch of GitHub's first native mobile app and an improved notifications experience. In addition, it is also taking GitHub Actions, the company's workflow automation and CI/CD solution, as well as GitHub Packages, out of beta. GitHub is also improving its code search, adding scheduled reminders and launching a pre-release program that will allow users to try out new features before they are ready for a wider rollout.

GitHub is also extending its sponsor program, which until now allowed you to tip individual open-source contributors for their work, to the project level. With GitHub Sponsors, anybody can help fund a project, and the members of that project then get to choose how to use the money. These projects have to be open source and have a corporate or nonprofit entity attached to them (and a bank account).

“Developers are what’s driving us and we’re building the tools and the experiences to help them come together to create the world’s most important technologies and to do it on an open platform and ecosystem,” GitHub SVP of Product Shanku Niyogi told me. Today’s announcements, he said, are driven by the company’s mission to improve the developer experience. Over the course of the last year, the company launched well over 150 new features and enhancements, Niyogi stressed. For its Universe show, the company decided to highlight the new mobile app and notification enhancements, though.

The new mobile app, which is now out in beta for iOS, with Android support coming soon, offers all of the basic features you'd want from a mobile app like this. The team decided to focus squarely on the kind of mobile use cases that would make the most sense for a developer on the go, so you'll be able to share feedback on discussions, review a few lines of code and merge changes. This isn't meant to be a tool that replicates the full GitHub experience, though at least on the iPad, you do get a bit more screen real estate to work with.

“When you start to look at the tablet experience, that then extends out because you now got more space,” explained Niyogi. “You can look at the code, you can navigate some of that, we support some of the key same keyboard shortcuts that github.com does to be able to look at a larger amount of content and a larger amount of code. So, the idea is the experience scales with the mobile devices you have, and but it’s also designed for the things you’re likely to do when you’re not using your computer.”

Others have built mobile apps for GitHub before, of course, and it turns out that the developers of GitHawk, which was launched by a group of engineers from Instagram, recently joined GitHub to help the company in its efforts to get this new app off the ground.

The second major new feature is the improved notifications experience. As every GitHub user on even a medium-sized team knows, GitHub’s current set of notifications can quickly become overwhelming. That’s something the GitHub team was also keenly aware of, so the company decided to build a vastly improved system that includes filters, as well as an inbox for all of your notifications right inside of GitHub.

“The experience for developers today can result in an inbox in Gmail or whatever email client you use with tons and tons of notifications — and it can end up being kind of hard to know what matters and what’s just noise,” Kelly Stirman, GitHub’s VP of Strategy and Product Management, said. “We’ve done a bunch of things over the last year to make notifications better, but what we’ve done is a big step. We’ve reimagined what notifications should be.”

Using filters and rules, developers can zero in on the notifications that matter to them, all without flooding their inboxes with unnecessary noise. Developers can customize these filters to their hearts’ content. That’s also where the new mobile experience fits in well. “Many times, the notification will be sent to you when you’re not at your computer, when you’re not at your desktop,” noted Stirman. “And that notification might be somebody asking for your help to unblock something. And so it’s natural we think that we need to extend the GitHub experience beyond the desktop to a mobile experience.”

Speaking of notifications: GitHub also today announced a new feature, in limited preview, that adds a few more notifications to your inbox. You can now set up scheduled reminders for pending code reviews.

Among the rest of today’s announcements, the improved code search stands out because that’s definitely an area where some improvements were necessary. This new code search is currently in limited beta, but should roll out to all users over the next few months. It’ll introduce a completely new search experience, the company says, that can match special characters and casing, among other things.

Also new are code review assignments, now in public beta, and a new way to navigate code on GitHub.

13 Nov 2019

Volkswagen’s $800M Tennessee factory expansion to include battery pack plant

Volkswagen said Wednesday it will build a battery pack assembly facility as part of an $800 million expansion project that will turn the Chattanooga, Tenn. factory into its North American base for manufacturing electric vehicles.

The Chattanooga factory expansion, which includes a 564,000-square-foot addition to the body shop and is expected to create 1,000 new jobs at the plant, has been in the works for some time now. But the battery pack assembly announcement, while logical, came as a surprise.

“This is a big, big moment for this company,” Scott Keogh, president and CEO of Volkswagen Group of America said in a statement. “Expanding local production sets the foundation for our sustainable growth in the U.S. Electric vehicles are the future of mobility and Volkswagen will build them for millions of people.”

The automaker’s Chattanooga expansion is just a piece of its broader plan to move away from diesel in the wake of the emissions cheating scandal that erupted in 2015. Globally, VW Group plans to commit almost $50 billion through 2023 toward the development and production of electric vehicles and digital services.

The Tennessee factory (along with the other new facilities) will produce electric vehicles using Volkswagen’s modular electric toolkit chassis, or MEB, introduced by the company in 2016. The MEB is a flexible modular system — really a matrix of common parts — for producing electric vehicles that VW says makes it more efficient and cost-effective.

The company also built a European facility in Zwickau, Germany. Earlier this month, VW began production of the ID.3 electric vehicle at the Zwickau factory. By 2022, VW’s MEB vehicles will be produced at eight locations on three continents.

EV production facilities are expected to come online in Anting and Foshan in China in 2020, and in the German cities of Emden and Hanover by 2022.

Volkswagen currently produces the midsize Atlas SUV and the Passat sedan at the Chattanooga factory. Production of its electric vehicles is set to begin in Chattanooga in 2022. The first model will be an SUV of the ID. family.
