Year: 2018

08 May 2018

You can now run Linux apps on Chrome OS

For the longest time, developers have taken Chrome OS machines and run tools like Crouton to turn them into Linux-based developer machines. That was a bit of a hassle, but it worked. Now things are getting easier, though. Soon, if you want to run Linux apps on your Chrome OS machine, all you’ll have to do is flip a toggle in the Settings menu. That’s because Google is going to start shipping Chrome OS with a custom virtual machine that runs Debian Stretch, the current stable version of that operating system.

It’s worth stressing that we’re not just talking about a shell here, but full support for graphical apps, too. That means you could now, for example, run Microsoft’s Linux version of Visual Studio Code right on your Chrome OS machine. Or build your Android app in Android Studio and test it right on your laptop, thanks to the built-in support for Android apps that came to Chrome OS last year.

The first preview of Linux on Chrome OS is now available on the Pixelbook, with support for more devices coming soon.

Google’s PM director for Chrome OS, Kan Liu, told me the company was obviously aware that people were using Crouton to do this before. But going that route also meant doing away with all of the security features that come with Google’s operating system. And as more powerful Chrome OS machines hit the market in recent years, demand for a feature like this grew as well.

To enable support for graphical apps, the team opted to integrate the Wayland display server; from the user’s perspective, the actual window dressing will look the same as any other Android or web app on Chrome OS.

Most regular users won’t necessarily benefit from built-in Linux support, but this will make Chrome OS machines even more attractive to developers — especially the more high-end ones like Google’s own Pixelbook. Liu stressed that his team put quite a bit of work into optimizing the virtual machine, too, so there isn’t a lot of overhead when you run Linux apps — meaning that even less powerful machines should be able to handle a code editor without issues.

Now, it’s probably only a matter of hours before somebody starts running Windows apps on Chrome OS with the help of the Wine compatibility layer.

08 May 2018

Google’s new Android App Bundles promise to make apps radically smaller

Google today announced Android App Bundles, a new tool for developers that will make apps radically smaller. The trick here is that developers can now specify which of their apps’ assets should be included for a given device, so there’s no need to ship every visual asset for every screen size, and support for every language, to every user — something many developers do today. The result can be install files that are sometimes more than 50 percent smaller than before.

As Google’s Stephanie Cuthbertson told me, large download sizes are often an issue for users in developing countries, but elsewhere, too, users often balk at installing large apps. “Apps are targeting more countries than ever, they have more features than ever,” she told me. “But we know the larger apps are, the fewer installs they get.”

To enable this new feature, Google rearchitected its whole app serving stack. As Cuthbertson noted, that was a major project. Since the Android team had been toying with this idea for a while, though, most of the Android platform was ready for this change.

So while the standard APK format isn’t going to change, every user now essentially gets a somewhat personalized file when hitting the Install button in Google Play.

Google says it trialed this service with some of its own apps already, including the YouTube and Google apps. A couple of other partners also tested it already; Microsoft, for example, saw a 23 percent file reduction for the LinkedIn app.

Most of the hard work to enable this feature is handled by Google, but developers who want to make use of it do have to specify which assets and languages they want to ship to which users. As Cuthbertson noted, much of this was possible before, but it was hard for developers to do. Now, they can use the same development flow as before and only have to make some very minor changes to enable support for App Bundles.
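
In practice, the build-file side of that can be quite small. As a rough sketch, assuming the Gradle Kotlin DSL and a version of the Android Gradle plugin with bundle support, the per-device splits might be configured like this:

```kotlin
// build.gradle.kts (app module)
android {
    bundle {
        // Let Google Play serve each device only the languages,
        // screen densities and native libraries it actually needs.
        language { enableSplit = true }
        density { enableSplit = true }
        abi { enableSplit = true }
    }
}
```

Building with the bundleRelease Gradle task then produces an .aab file, which developers upload in place of a universal APK.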

In addition to delivering the full app through an App Bundle, Google is also today introducing a related new tool: dynamic features. This essentially allows developers to make their apps modular. As Cuthbertson noted, that may be especially interesting to developers whose apps offer lots of features, some of which may only see usage by a very small number of users. In those cases, developers can simply ship a feature on demand when a user first tries to use it. Developers can start experimenting with these features in the latest canary release of Android Studio.
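
On the client side, Google’s Play Core library exposes an API for fetching such a module the moment it’s needed. A minimal Kotlin sketch, with a hypothetical on-demand module named video_editor:

```kotlin
import android.content.Context
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
import com.google.android.play.core.splitinstall.SplitInstallRequest

// Requests the hypothetical "video_editor" dynamic feature module
// from Google Play and installs it without reinstalling the app.
fun requestVideoEditor(context: Context) {
    val manager = SplitInstallManagerFactory.create(context)
    val request = SplitInstallRequest.newBuilder()
        .addModule("video_editor")
        .build()
    manager.startInstall(request)
        .addOnSuccessListener { /* module installed; launch the feature */ }
        .addOnFailureListener { /* fall back, retry or surface an error */ }
}
```

The rest of the app keeps working while the download happens in the background.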

08 May 2018

Google’s Android development studio gets a new update with visual navigation editing

Android’s development studio is getting a new update as Google rolls out Android Studio 3.2 Canary, adding new tools for visual navigation editing and Jetpack.

The new release includes build tools for the new Android App Bundle format, Snapshots, a new optimizer for smaller app code and a new way to measure an app’s impact on battery life. The Snapshots tool is baked into the Android Emulator and is geared toward getting the emulator up and running in two seconds. All of this is aimed at making Android app development easier as the company looks to woo developers — especially those just getting started — into an environment that’s built around creating Android apps.

The visual navigation editor looks a bit like a flow chart, where users can move screens around and connect them. You can add new screens and position them in your flow, and under the covers the tool manages the whole back stack for you. Google has increasingly worked to abstract away a lot of the complex elements of building applications, whether that’s making its machine learning framework TensorFlow more palatable by letting developers create tools using their preferred languages or trying to make it easier to build an app quickly. Visual navigation is one way to further abstract away the complex process of wiring up the different activities within an app.
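
The graph itself is drawn visually, but triggering a transition from code stays small. Here is a rough Kotlin sketch, assuming the androidx Navigation KTX artifacts; the action ID is a hypothetical one generated from such a graph:

```kotlin
import android.view.View
import androidx.navigation.findNavController

// R.id.action_home_to_details is a hypothetical action defined in a
// navigation graph authored in Android Studio's visual editor.
fun wireUpDetailsButton(button: View) {
    button.setOnClickListener { view ->
        view.findNavController().navigate(R.id.action_home_to_details)
    }
}
```

The back-stack bookkeeping that developers previously handled with FragmentTransaction calls happens behind that single navigate() call.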

As Apple and Google continue to compete, it’s important for Google to ensure that apps keep launching on Google Play in order to continue to drive Android device adoption. The sped-up emulator, in particular, may solve a pain point for developers who want to rapidly test parts of their apps and see how they might operate in the wild without having to wait for the app to load in an emulator or on a test device.

08 May 2018

Android gets a Jetpack

At its I/O developer conference, Google today announced Jetpack, a major update to how developers write applications for Android. Jetpack represents the next generation of the Android Support Library, which virtually every Android app in the Play Store uses because it provides a lot of the basic functionality you would expect from a mobile app. It’s also the next step in the work the company has been doing with architecture components, a feature it launched at last year’s I/O.

Jetpack combines the existing Android support libraries and components and wraps them into a new set of components (including a couple of new ones) for managing things like background tasks, navigation, paging and life-cycle management, as well as UI features like emoji and layout controls for platforms like Android Wear, Auto and TV, plus more foundational features like AppCompat and Test.

It’s important to note that developers can choose whether they want to use Jetpack. Ahead of today’s announcement, Stephanie Saad Cuthbertson, Google’s product management director for Android, told me the company will continue to release all updates in both the support libraries and Jetpack.

Cuthbertson also stressed that the general idea here is to remove some of the repetitive grunt work that comes with writing new apps and help developers get more done while writing less code. New components that will go live with Jetpack today include WorkManager, Paging, Navigation and support for Slices, the newly launched feature for highlighting results from installed apps in Google Search and the Google Assistant. WorkManager handles background jobs, while the Navigation component helps developers structure their in-app UI. The Paging Component lets developers break down the data they pull from a server into — you guessed it — pages, so they can display results faster.
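
To make that concrete, here is a minimal Kotlin sketch of a one-off background job with WorkManager, assuming a recent androidx.work release (including its KTX artifact); UploadWorker and its upload logic are hypothetical:

```kotlin
import android.content.Context
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// A hypothetical background job. WorkManager runs doWork() off the
// main thread and keeps the job alive across process death.
class UploadWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        // ... upload any pending files here ...
        return Result.success()
    }
}

fun scheduleUpload(context: Context) {
    WorkManager.getInstance(context)
        .enqueue(OneTimeWorkRequestBuilder<UploadWorker>().build())
}
```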

All of these new components, except for the Paging component, which is stable, are officially still in alpha.

It’s worth noting that Jetpack was very much designed with the Kotlin programming language in mind. It was only last year that Google elevated Kotlin to a first-class language in the Android ecosystem; 28 out of the top 100 apps in the Google Play store already use it. Cuthbertson also noted that 95 percent of the developers who use Kotlin say they are happy with using it for Android development.

With today’s launch of Jetpack, Google is also launching Android KTX, a set of Kotlin extensions for Android, as well as Jetpack support in the latest canary release of Android Studio 3.2.
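
A small example of the kind of boilerplate KTX trims, using the SharedPreferences extension from the core-ktx artifact:

```kotlin
import android.content.SharedPreferences
import androidx.core.content.edit

fun saveLastUser(prefs: SharedPreferences, name: String) {
    // Without KTX: prefs.edit().putString("last_user", name).apply()
    // With KTX, the edit()/apply() pair collapses into one call:
    prefs.edit { putString("last_user", name) }
}
```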

08 May 2018

Africa Roundup: Safaricom unveils Bonga, Africa’s Talking gets $8.6M, TechCrunch visits Nigeria, Ghana

March and April saw fresh VC funding in Africa, mobile money merging with social media, motorcycle taxis going digital, and TechCrunch visiting Nigeria and Ghana.

Africa’s Talking with more VC

Business APIs on the continent got a boost from global venture capital thanks to an $8.6 million round for Africa’s Talking—a Kenya-based enterprise software company.

The new financing was led by IFC Venture Capital, with participation from Orange Digital Ventures and Social Capital.

Africa’s Talking works with developers to create solution-focused APIs across SMS, voice, payment and airtime services. The company has a network of 20,000 software developers and 1,000 clients, including solar power venture M-Kopa and financial conglomerate Ecobank.

Africa’s Talking operates in Kenya, Uganda, Rwanda, Malawi, Nigeria, Ethiopia, and Tanzania and maintains a private cloud space in London. Revenues come primarily from fees on a portion of the transactional business its solutions generate.

The company plans to use the round to hire in Nairobi. It will also expand in other African geographies and invest in IoT, analytics, payments, and cloud offerings.

CEO Samuel Gikandi and IFC Ventures’ Wale Ayeni offered insight on the round in this TechCrunch feature.

M-Pesa the social network

Kenya’s largest telco, Safaricom, rolled out a new social networking platform called Bonga to augment its M-Pesa mobile money product.

“Bonga is a conversational and transactional social network,” Shikoh Gitau, Alpha’s Head of Products, told TechCrunch.

The new platform is an outgrowth of Safaricom’s Alpha innovation incubator.

“[Bonga’s] focused on pay, play, and purpose…as the three main things our research found people do on our payment and mobile network,” Gitau said—naming corresponding examples of e-commerce, gaming, and Kenya’s informal harambee savings groups.

Safaricom offered Bonga to a test group of 600 users and will soon allow that group to refer it to friends as part of a three-phase rollout.

The platform will channel Facebook, YouTube, iTunes, PayPal, and eBay. Users will be able to create business profiles parallel to their social media profiles and M-Pesa accounts to sell online. Bonga will also include space for Kenya’s creative class to upload, shape, and distribute artistic products and content.

Safaricom may take Bonga to other M-Pesa markets: currently 10 in Europe, Africa, and South Asia. The Bonga announcement came shortly after Safaricom announced a money-transfer deal with PayPal.

TLcom’s $5M for Terragon

Venture firm TLcom Capital made a $5 million investment in Nigeria-based Terragon Group, the developer of a software analytics service for customer acquisition.

Located in Lagos, Terragon’s software services give clients — primarily telecommunications and financial services companies — data on Africa’s growing consumer markets.

Products allow users to drill down on multiple combinations of behavioral and demographic information and reach consumers through video and SMS campaigns while connecting to online sales and payments systems, according to the company.

Terragon has a team of 100 across Nigeria, Kenya, Ghana and South Africa and clients across consumer goods, financial services, gaming, and NGOs.

The company generates revenue primarily from transaction facilitation for its clients. Terragon bootstrapped itself into the black with revenues of between $4 million and $5 million a year, CEO Elo Umeh told TechCrunch.

Uber and Taxify’s motorcycle move

Global ride-hailing rivals Taxify and Uber launched motorcycle passenger service in East Africa. Customers of both companies in Uganda and Taxify riders in Kenya can now order up two-wheel transit by app.

The moves come as Africa’s moto-taxis — commonly known as boda bodas in the East and okadas in the West—upshift to digital.

Taxify’s app now includes a “Boda” button and Uber’s an “uberBoda” icon for ordering up two-wheel taxi transit via phone in the respective countries.

Other players in Africa’s motorcycle ride-hail market include Max.ng in Nigeria, SafeBoda in Uganda, and SafeMotos and Yego Moto in Rwanda.

Both Taxify and Uber look to expand motorcycle taxi service in other African countries—full story here at TechCrunch.

TechCrunch in Nigeria and Ghana

And finally, TechCrunch visited Nigeria and Ghana in April. We hosted meet and greets at several hubs and talked with dozens of startup heads—including a few Battlefield Africa 2017 alums, such as Agrocenta at Impact Hub Accra and FormPlus in Lagos. TechCrunch is scouting African startups to include in the global Startup Battlefield competitions. Applications to participate in Startup Battlefield at Disrupt San Francisco are open at apply.techcrunch.com.

More Africa-Related Stories @TechCrunch

  • DFS Lab is helping the developing world bootstrap itself with fintech


08 May 2018

Google makes the camera smarter with a Google Lens update, integration with Street View

Google today showed off new ways it’s combining the smartphone camera’s ability to see the world around you, and the power of A.I. technology. The company, at its Google I/O developer conference, demonstrated a clever way it’s using the camera and Google Maps together to help people better navigate around their city, as well as a handful of new features for its previously announced Google Lens technology, launched at last year’s I/O.

The maps integration combines the camera, computer vision technology, and Google Maps with Street View.

The idea is similar to how people navigate without technology – they look for notable landmarks, not just street signs.

With the camera/Maps combination, Google is doing that now, too. It’s like you’ve jumped inside Street View, in fact.

The Google Maps interface sits at the bottom of the screen, while the camera shows you what’s in front of you. There’s even an animated guide (a fox) you can follow to find your way.

The feature was introduced ahead of several new additions for Google Lens, Google’s smart camera technology.

Already, Google Lens can do things like identify buildings, or even dog breeds, just by pointing your camera at the object (or pet) in question.

With an updated version of Google Lens, it will be able to identify text too. For example, if you’re looking at a menu, you could point the camera at the menu text in order to learn what a dish consists of – in the example on stage, Google demonstrated Lens identifying the components of ratatouille.

However, the feature can also work for things like text on traffic signs, posters or business cards.

Google Lens isn’t just reading the words; it’s understanding the context, which is what makes the feature so powerful. It knows that you want to understand the menu item, for instance.
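
Lens itself isn’t something developers call directly, but Google ships similar on-device text recognition through its ML Kit SDK, also introduced at this year’s I/O. A rough Kotlin sketch, assuming a recent firebase-ml-vision artifact:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Runs on-device text recognition on a photo, e.g. of a menu,
// and prints each detected block of text.
fun readText(photo: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(photo)
    FirebaseVision.getInstance().onDeviceTextRecognizer
        .processImage(image)
        .addOnSuccessListener { result ->
            for (block in result.textBlocks) println(block.text)
        }
        .addOnFailureListener { /* handle the error */ }
}
```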

Another new feature called Style Match is similar to the Pinterest-like fashion search option that previously launched in Google Images. 

With this, you can point the camera at an item of clothing – a shirt or pants – or even accessories like a handbag – and Lens will find items that match that piece’s style. It does this by running searches through millions of items, but also by understanding things like different textures, shapes, angles and lighting conditions.

Finally, Google Lens is adding real-time functionality, meaning it will actively seek out things to identify when you point the camera at the world around you, then attempt to anchor its focus to a given item and present the information about it.

It can also display the results of what it finds on top of things like storefronts, street signs or concert posters.

“The camera is not just answering questions, but putting the answers right where the questions are,” noted Aparna Chennapragada, Head of Product for Google’s Augmented Reality, Virtual Reality and Vision-based products (Lens), at the event.

Google Lens has previously been available in Photos and Google Assistant, but will now be integrated right into the Camera app across a variety of top manufacturers’ devices.

The updated features for Google Lens will arrive in the next few weeks.

08 May 2018

Uber adds another flying taxi partner

Uber has teamed up with Karem Aircraft to develop electric vertical takeoff and landing (eVTOL) vehicles for the ride-hailing company’s upcoming flying taxi service, the companies announced today at Uber Elevate.

Karem Aircraft, which has patented Optimum Speed Tiltrotor technology for military and commercial applications, has been working with Uber for about a year to create the Butterfly concept. This type of vehicle is supposed to be a passenger-friendly adaptation of Karem’s core technology.

“We were always dreaming of doing things commercially, but all of our funding came from the military,” Karem Aircraft founder Abe Karem told TechCrunch ahead of the announcement. “What we were doing was advanced and labeled ‘risky.'”

Now, the company is able to do what people previously thought was impossible, Karem said. The Butterfly is a quad tiltrotor with four large rotors mounted on the wings and tail. The idea is to combine the vertical lift capability of a helicopter with the speed and range of a fixed-wing aircraft. The Butterfly is also designed to be more efficient, thanks to its variable-RPM rotors.

“Variable RPM allows us to maintain good efficiency across a wide range of rotor thrust,” Karem Aircraft CEO Ben Tigner told me.

This partnership comes a little over one year after Uber announced a number of vehicle partnerships with established aeronautics and VTOL manufacturers at last year’s Elevate event. Other partners include Aurora Flight Sciences, Embraer, Bell Helicopter, Pipistrel Aircraft, Mooney and ChargePoint. That’s because Uber itself isn’t building any vehicles — it’s relying on its partners to do that.

Earlier today, Uber unveiled its common reference model design concepts, with the goal of encouraging companies and eVTOL manufacturers to design prototypes with uberAIR in mind. For example, the design model requires the propeller blades to sit as high as possible in order to ensure people don’t have to duck while boarding and exiting the aircraft.

As long as vehicle manufacturers can adhere to Uber’s common reference designs, they will be eligible to participate in Uber Elevate. By 2020, Uber envisions having multiple vehicle partners ready, Uber Head of Aviation Eric Allison told me, “but we’re not going to launch them if they’re not ready.”

The idea with Uber Elevate is to create an ecosystem with partners across the entire spectrum — batteries, skyports, vehicles and so forth, Allison said.

“We believe that this is a potentially huge market and it’s not just about the ecosystem,” he said. “You need the right ground infrastructure, as well as vehicles to make the overall system be much more useful on a larger scale than small plane aviation is today.”

Uber Elevate’s ultimate goal is to launch and operate a worldwide ridesharing network of small, electric aircraft that can carry four people at any given time.

Other fun facts:

  • Uber hopes to demonstrate flights in 2020
  • uberAIR will be commercially available in 2023 in Dallas-Fort Worth and Frisco, Texas, and Los Angeles (Uber is no longer aiming for Dubai in its initial launch)
  • uberAIR hopes to hit speeds of up to 200 mph
  • uberAIR can travel up to 60 miles on a single charge
  • Cruising altitude must be roughly 1,000 to 2,000 feet above ground

Given that the airspace is much more regulated, Uber is prepared to create core systems that enable the entire ecosystem to operate. That means developing an airspace management system that is a more complex version of what we know today as air traffic control.

“Air transport is much more regulated,” Allison said, “and needs to be much more highly coordinated. It can’t be a total free-for-all in the sky.”

In order to achieve all of this, Uber will need skyports, so it has teamed up with real estate companies like Hillwood Properties and Sandstone Properties to create them for the uberAIR network.

Earlier today, Allison showed off some early skyport design concepts. One could handle 1,000 landings per hour; another, which Uber envisions sitting on top of a parking garage, could handle about 100 landings per hour.

I’ll be at Uber Elevate today and tomorrow, so be on the lookout for more news.

08 May 2018

Watch Google I/O developer keynote live right here

Google I/O is nowhere near done. While the main keynote just ended, the company is about to unveil the next big things when it comes to APIs, SDKs, frameworks and more.

The developer keynote starts at 12:45 PM Pacific Time (3:45 PM on the East Coast, 8:45 PM in London, 9:45 PM in Paris) and you can watch the live stream right here on this page.

If you’re an Android developer, this is where you’ll get the juicy details about the next version of Android. You can expect new possibilities and developer tools for you and your company. We’ll have a team on the ground to cover the best bits right here on TechCrunch.

08 May 2018

Google Maps will soon give you better recommendations

Google will soon launch a new version of Google Maps that will give you more personalized recommendations than before. Google has long worked to make Maps seem more personalized, but since Maps is now about far more than just directions, the company is introducing new features to give you better recommendations for local places.

“Today, our users aren’t just asking for the fastest route to a place but also what’s happening around them, what the new places are and what the locals are doing in their neighborhood,” Google VP for engineering and product management Jen Fitzpatrick noted in today’s keynote.

The first new feature to enable this is the ‘for you’ tab. This new part of Google Maps will learn from your personal preferences and tell you about what’s new in your neighborhood (or other neighborhoods you are watching). Maybe there’s a new cafe or restaurant.

Over the course of the last few years, reviews in Google Maps have also become increasingly important. But what does a four-star review really mean? Going forward, Google Maps will take those reviews and mash them up with what it knows about you to give you a more personalized score based on your context and interests.

Another — not AI-related — feature Google is adding to Maps is a new Group Planning feature that’ll allow you to long-press on a place and then add it to a shareable list.

08 May 2018

Google launches a Partner Program and developer API for Google Photos

Along with the A.I.-powered changes coming to the Google Photos app, Google today also announced a new way for developers to work with the Google Photos service. The company is launching a developer API that will allow other apps and services to connect to, upload to, and share to a user’s Google Photos library, as well as a Partner Program for Google Photos.

Early partners include HP, Legacy Republic, Nixplay, Xero and Timehop.

“This is really about allowing users to use their content across the apps and products that they own or that they love – to take that magic of Google Photos and make it helpful to them wherever they need it,” said Google Photos Product Manager Ben Greenwood.

The Google Photos Library API lets developers help users easily find photos based on what’s in the photo, where it was taken or other attributes; upload directly to users’ libraries; organize albums and add titles and locations; and use shared albums to collaborate and transfer photos.

Google is kicking off the launch of the new API by showcasing a handful of developers who are already adopting the technology.

One is a smart photo frame maker called Nixplay, whose device can link to your Google Photos account in order to access your shared albums and display them on the frame’s screen. (This could help address an issue with other frame makers as well – sometimes the frame is nice, but the software to configure it is not so pretty.)

Another API partner is the accounting and expensing software company Xero, which will be able to connect to your Google Photos account to do automatic receipt matching. That is, all you’ll have to do is take a photo of a receipt, and the software does the rest.
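
To give a sense of the API’s shape, the library’s search endpoint accepts content-category filters, receipts among them. A minimal Kotlin sketch, assuming an OAuth 2.0 access token with a Photos Library scope has already been obtained:

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Asks the Photos Library API for media items Google has categorized
// as receipts; accessToken is a hypothetical, pre-obtained OAuth token.
fun searchReceipts(accessToken: String): String {
    val body =
        """{"filters":{"contentFilter":{"includedContentCategories":["RECEIPTS"]}}}"""
    val conn = URL("https://photoslibrary.googleapis.com/v1/mediaItems:search")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.setRequestProperty("Authorization", "Bearer $accessToken")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.doOutput = true
    conn.outputStream.use { it.write(body.toByteArray()) }
    // A JSON page of matching mediaItems, with pagination tokens.
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```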

A company called Legacy Republic, which offers photo digitization services to consumers, will use the API to upload its scans right to your Google Photos account.

“Not only does this collaboration further our shared mission of offering consumers a single, A.I.-powered platform for a lifetime of memories, it’s also a critical step forward in realizing Legacy Republic’s vision of reinventing photo commerce,” said Brian Knapp, co-founder & CEO of Legacy Republic, in a statement. “Imagine photos of your wedding day automatically resurfacing on your anniversary – side by side with those of your parents’ and grandparents’ – all through your Google Photos account. With Legacy Republic’s Memory Makeovers and the Google Photos platform, this supercharged type of ‘Then and Now’ offering will soon become a reality,” he added.

The Google Photos Library API is launching into developer preview, Google announced today at its I/O conference. It will then become generally available in a few months’ time. In the meantime, Google is offering developer documentation and sign-up for the Google Photos partner program.