Category: UNCATEGORIZED

01 Oct 2019

Coding training and outsourcing service Catalyte launches a toolkit for corporate ‘up-skilling’

Catalyte, the Baltimore-based coding training and placement service, has launched a new software service designed to take its machine learning-based skills-assessment and training program to companies around the country.

With revenues approaching $100 million for its outsourced software development services, Catalyte is hoping to take the lessons and tools it has learned and developed over the course of its 18-year history as a staffing and training company for the tech industry and sell them to companies looking to retrain, or provide additional skills development opportunities for, their employees.

“Even if we were the largest employer in the world we still would not be able to move the needle on the labor economy,” says Catalyte’s chief executive, Jacob Hsu. 

He sees the company’s mission as providing a critical step for companies to identify the employees in their workforce with the skills to become coders and an opportunity for those employees to then receive the training they need to move into higher paying roles as software eats into low-skilled, repetitive labor.

“We’re encouraging all of these employers to deploy these up-leveling skills,” Hsu says.

At Catalyte, the company’s success has hinged on practicing what it preaches (and what it’s now selling). Launched in 2000 as a staffing service in Baltimore called Catalyst Devworks by a former White House economist, Michael Rosenbaum, the company expanded to locations in Chicago and Portland and offers training and workforce development through contracted consulting projects with companies.

Photo courtesy of Getty Images

The company’s recruits come from anywhere and everywhere, and hiring hinges on a skills test that would-be employees must take, monitored by software that tracks how test-takers respond to the company’s questions.

Once an applicant passes the test, they’re brought in for training and given a two-year contract, during which time they’re put to work on development projects Catalyte has won from customers like Under Armour, Aetna, AT&T and Microsoft.

Catalyte’s developers are paid roughly $40,000 per year (less than half of what a developer typically makes) while they’re working under the two-year contract, and are then allowed to seek employment outside the company. Any employee who breaks the mandatory two-year contract is subject to a $25,000 penalty, according to a report in Fast Company. As they enter their third year, their contract with Catalyte gets renegotiated, and employees who stay with the company can earn at least $75,000.

“We’re taking people from all walks of life,” says Hsu. “The average salary is $25,000 for people who have come in to the program… But within five years from working with the company, the average salary is $98,000.”

It’s this kind of narrative, and the company’s solid revenue, that attracted investors like Steve Case, who’s backing Catalyte through his $150 million Rise of the Rest Seed Fund.

In 2018, Catalyte raised roughly $27 million in a round of funding from Palm Drive Capital, Cross Culture Ventures, Expon Capital, and the Rise of the Rest Seed Fund.

The relatively novel approach to training and hiring (with some of the company’s recruits even coming in through Craigslist ads that pitch getting paid to learn to code) has netted Catalyte some impressive statistics when it comes to the diversity of its workforce — another important criterion for Case’s Rise of the Rest fund.

“When you use this approach to hiring [in a city]… you end up with a workforce that’s similar to the demographics of a city,” says Hsu.

In Baltimore, the company’s workforce is about 29% African American, and 30% of its developers are women. The average age of a programmer in the company’s workforce is 33 years old, and education levels range from the roughly one quarter of recruits without a college degree to college-educated candidates.

Catalyte’s growth over the past three years has been nothing short of explosive. The company went from 50 employees in 2016 to around 800 people on staff now.

That staff is critical not just to the company’s current business model; it also served as a training tool for the machine learning and assessment tools that Catalyte is now trying to sell. “We spent over a decade collecting outcome data from engineering projects,” says Hsu. That data was used to create the company’s metrics for whether or not a candidate for a programming job at the company would be successful.

The company intends to bring its assessment tool to market in the fourth quarter, but on the back of its recent fundraising, Catalyte has been ramping up its research and development activities. It wants to begin putting together a curriculum around cybersecurity and site reliability engineering. The software will cost roughly $1,000 per seat for every employee who receives its training regimen.

“One of the fundamental ways our economy is going to both remain competitive on the international level and expand opportunities to more Americans is by changing the way we identify talent,” said Case in a statement discussing Catalyte’s financing last year. “Catalyte proved to us that not only can it bring new and underrepresented groups into the fold, it can do so while helping its own clients grow.”

While the company is growing its product pipeline, it also intends to expand the number of development and training centers it operates. The plan, according to an interview Hsu gave to the local technology news site Technically Baltimore in February, is to have 20 development centers around the country by 2020.


01 Oct 2019

Defining micromobility and where it’s going with business and mobility analyst Horace Dediu

Micromobility has taken off over the last couple of years. Between electric bike-share and scooter-share, these vehicles have made their way all over the world. Meanwhile, some of these companies, like Bird and Lime, have already hit unicorn status thanks to massive funding rounds.

Horace Dediu, the well-known industry analyst who coined the term micromobility as it relates to this emerging form of transportation, took some time to chat with TechCrunch ahead of Micromobility Europe, a one-day event focused on all-things micromobility.

We chatted about the origin of the word micromobility, where big tech companies like Apple, Google and Amazon fit into the space, opportunities for developers to build tools and services on top of these vehicles, the opportunity for franchising business models, the potential for micromobility to be bigger than autonomous, and much more.

Here’s the Q&A I did with Dediu ahead of his micromobility conference, lightly edited for length and clarity.


Megan Rose Dickey: Hey, Horace. Thanks for taking the time to chat.

Horace Dediu: Hey, no problem. My pleasure.

Rose Dickey: I was hoping to chat with you a bit about micromobility because I know you have the big conference coming up in Europe, so I figured this would be a good time to touch base with you. I know you’ve been credited with coining the term micromobility as it relates to the likes of shared e-bikes and scooters.

So, to kick things off, can you define micromobility?

Dediu: Yes, sure. So, the idea came to me because I actually remembered microcomputing.

01 Oct 2019

Unagi is the iPhone of scooters you actually buy

Can you never find a scooter to rent when you need one? Here’s a radical idea. Buy one. While Bird, Lime, Skip, Scoot, Uber, Lyft and more compete for on-demand micromobility, a new startup invented a vehicle worthy of ownership. The Unagi looks downright futuristic with its classy paint jobs, foldable body, LED screen, and built-in lights. The ride feels sturdy, strong, and responsive while being light enough at 24lbs to lug up subway stairs or the flights to your home.

That’s why Unagi has become a hit with musicians like Kendrick Lamar, Chance The Rapper, Halsey, Steve Aoki, and teen pop megastar Billie Eilish, who use the scooter to rip around the empty venues as they soundcheck before concerts. Paparazzi shots of those moments have spurred demand for the $990 dual motor and $840 single motor Unagis, with co-founder David Hyman telling me the startup can’t make them fast enough but it’s ramping up production.


To fuel the fervor for the scooter before it’s inevitably copied by cheap knock-offs, Unagi has raised a $3.15 million seed round led by Menlo Ventures. Building on its $750,000 in Kickstarter, angel, and founder-contributed funding, the cash will go to building out a distribution network and developing its next-gen scooter with a smoother ride but no more pounds.

“We felt Unagi’s focus on light weight and substantial powering in a beautifully designed package was the right approach for ownership,” Menlo partner Shawn Carolan tells me. “This is what premium brands do – continue to reinvent the way we think about the world. This category of vehicle – personal, portable, and electric – has enormous potential, and we are still in the first inning of the game.”

The magic of the Unagi Model One is how it balances speed, battery, weight, price, and style so it works for most anything and everyone. That combination won it CNET‘s best all-around scooter award versus the hardcore but extremely heavy Boosted Rev, cheap but weak Swagtron, long-lasting but boring Ninebot, and speedy but scary Mercane.

The Unagi’s biggest flaw is the ride’s lack of smoothness: its harder airless wheels and narrow handlebars can make gravelly roads precarious. The high-pitched beeeeeep of its horn is also so annoying that people are more likely to cover their ears than get out of your way, but Hyman promises his 12-person team will fix that.


Where Unagi truly excels is in its looks. The lithe curves of its polished carbon fiber frame are accented with candy paint jobs in matte black, white, grey, and blue. It ditches the bike handlebar vibe for something closer to space shuttle controls. And while many people scoff at scooter riders, I saw those smirks turn into curious awe as I flew by.

Hyman got the idea for a premium scooter you own after a rental turned into a melty mess. He’d taken an on-demand scooter to the grocer on a hot day, picked up some ice cream, and emerged to find his ride snatched by another user. He hustled to another nearby, but someone else got there first. He walked home dripping sugar everywhere, wondering, “Why am I messing around with rentals? I just want to own one.”

He bought a generic scooter off Alibaba, and despite it being janky straight out of the box, “it made me feel like I was a superhero with this magic carpet.” But he wanted something better.

Previously the CEO of audio fingerprinting giant Gracenote and then Beats Music before it sold to Apple, Hyman is known for his obsession with hi-fi speaker systems. So after touring Chinese scooter factories and still being unsatisfied, he partnered with a group of inventors called QMY who’d prototyped a slick vehicle they called the Swan. Hyman funded it to production, brought the team in house, and now they’re selling Unagis as fast as they can.

Now the startup wants to double down on selling to more petite riders who could never carry the 46lb Boosted Rev out of a train station. But the clock is ticking before copycats with similar silhouettes but inferior insides spring up. Meanwhile, Unagi must keep safety top of mind to avoid any disastrous crashes hurting customers and its brand. There are also plenty of better-funded mobility giants that could barge into the space if Unagi can’t build a lead.


Scooters are part of a powerful wave of new technologies that actually sell us back our time. When a 20-minute walk becomes a 4-minute scoot, you gain something priceless. Urban landscapes unfold beneath their wheels as you explore new neighborhoods or parts of parks. I was once a diehard electric skateboarder until a crash on a Boosted Board shattered my ankle. Unagi is the first scooter that delivers that same gliding feeling of weightlessness and freedom but in a form-factor safe enough for most people to experience.

01 Oct 2019

GoPro launches new Hero8 Black and MAX action cameras

GoPro has released new versions of both its Hero line and its newer 360-degree ruggedized action camera. The $399 GoPro Hero8 Black’s most significant change is a new body design that incorporates GoPro’s signature mounting system right into the case, so you no longer need add-on frames to attach it to selfie sticks, suction mounts, body mounts and more.

The GoPro Hero8 Black shoots at resolutions from 1080p to 4K, and also gains HyperSmooth 2.0, the aptly named second-generation version of GoPro’s proprietary digital stabilization technology. The first version, which premiered on the GoPro Hero7, was hailed for its effectiveness, and the follow-up is apparently even more powerful – plus, it provides new adjustment options so you can tweak how aggressive it is.

GoPro’s proprietary variable speed recording mode TimeWarp also gets upgraded to 2.0, and there’s better on-board wind suppression for mic-free recording. The body changes mean that the lens is no longer removable, but GoPro is planning to release a new mounting system for filters soon to make up for this limitation.

On top of the new design, there’s a series of new aftermarket add-ons, which GoPro calls “Mods,” to provide extra features. There’s a Media Mod ($79.99) that includes a built-in shotgun mic; a Display Mod ($79.99), which has a flip-up LCD viewfinder for vlogging; and a Light Mod ($49.99), which has a 200-lumen LED continuous video light source.

The other new camera, the GoPro MAX, is a $499 successor to the GoPro Fusion and provides 360 capture. It’s also designed to produce great single-lens, traditional wide-angle footage, and has its own version of HyperSmooth stabilization called Max HyperSmooth (which you know must be extreme because it’s called ‘Max’).

The MAX seems oriented less at 360 video and more at advanced content creators who want maximum editing flexibility and the ability to vlog more easily, since it also includes a front-facing display.

GoPro faces increased competition from legit rivals in its home category, including competing devices from DJI and Insta360, but the slate of new upgrades here really does sound like a set of quality, meaningful improvements vs. the existing Hero7, and the new all-in-one body design should make the camera even more convenient for general use while out and about.

Pre-orders are live now for both cameras, with shipping starting on October 15 for the GoPro Hero8 Black and on October 24 for the MAX.

01 Oct 2019

Daily Crunch: WeWork delays its IPO

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. WeWork withdraws its S-1 filing, will delay its IPO

The move was widely expected, but The We Company (which owns WeWork) made it official yesterday, with new co-CEOs Artie Minson and Sebastian Gunningham declaring that they’ve “decided to postpone our IPO to focus on our core business.”

Since the company’s S-1 became public, it has faced intense scrutiny over the general state of its finances, and more specifically over the power and behavior of Adam Neumann, who stepped down as CEO last week.

2. Europe’s top court says active consent is needed for tracking cookies

It’s a decision that plunges many websites into legal hot water in Europe. The Court says consent must be obtained prior to storing or accessing non-essential cookies, such as tracking cookies for targeted advertising.

3. Twitter launches its anti-abuse filter for Direct Messages

Twitter is rolling out its spam and abuse filter for Direct Messages, a month and a half after the company announced it had started testing the feature. This should be useful for people who want to keep their DMs open without having to see abusive content.

4. Microsoft OneDrive Personal Vault rolls out worldwide, launches expandable storage

Earlier this summer, Microsoft introduced an extra layer of security for its OneDrive product, allowing users to protect their files with two-step verification. Now it’s rolling this feature out worldwide.

5. Pandora puts its personalization powers to work in a revamped app

The company’s new mobile experience includes a dedicated “For You” tab where a continually updated feed of content is presented to users, including music and podcast recommendations.

6. Rapyd raises $100M for its ‘fintech as a service’ API, now valued at nearly $1B

Currently, Rapyd lets customers use its API to enable checkout, funds collection, fund disbursements, compliance as a service, foreign exchange, card issuing and integration.

7. SmartNews’ head of product on how the news discovery app wants to free readers from filter bubbles

SmartNews’ Jeannie Yang talks about the app’s place in the media ecosystem, creating recommendation algorithms that don’t reinforce biases, the difference between its Japanese and American users and the challenges of presenting political news in a highly polarized environment. (Extra Crunch membership required.)

01 Oct 2019

Apple launches Deep Fusion feature in beta on iPhone 11 and iPhone 11 Pro

Apple is launching an early look at its new Deep Fusion feature on iOS today with a software update for beta users. Deep Fusion is a technique that blends multiple exposures together at the pixel level to give users a higher level of detail than is possible using standard HDR imaging — especially in images with very complicated textures like skin, clothing or foliage.

The developer beta released today supports the iPhone 11, where Deep Fusion will improve photos taken on the wide camera, and the iPhone 11 Pro and Pro Max, where it will kick in on the telephoto and wide-angle lenses but not the ultra-wide.

According to Apple, Deep Fusion requires the A13 and will not be available on any older iPhones. 

As I discussed extensively in my review of the iPhone 11 Pro, Apple’s ‘camera’ in the iPhone is really a collection of lenses and sensors whose output is processed aggressively by dedicated machine learning software running on specialized hardware. Effectively, a machine learning camera.

Deep Fusion is a fascinating technique that extends Apple’s philosophy of photography as a computational process out to its next logical frontier. As of the iPhone 7 Plus, Apple was blending output from the wide and telephoto lenses to provide the best result. This process happened without the user ever being aware of it.


Deep Fusion continues in this vein. It will automatically take effect on images that are taken in specific situations.

On wide-lens shots, it becomes active just above the roughly 10-lux floor where Night Mode kicks in. The top of the range of scenes where it is active varies depending on the light source. On the telephoto lens, it will be active in all but the brightest situations, where Smart HDR takes over instead, providing a better result due to the abundance of highlights.
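
Those rules can be read as a simple decision table. Here's a hypothetical sketch: the ~10-lux Night Mode floor comes from Apple's description, but the bright-scene cutoff value is a made-up placeholder, since the real ceiling varies with the light source.

```python
# Hypothetical sketch of the per-lens capture-mode selection described above.
# NIGHT_MODE_FLOOR_LUX is from the article; `bright_cutoff` is a placeholder.
NIGHT_MODE_FLOOR_LUX = 10

def capture_mode(lens: str, lux: float, bright_cutoff: float = 2000) -> str:
    if lens == "wide":
        if lux < NIGHT_MODE_FLOOR_LUX:
            return "night_mode"
        return "deep_fusion" if lux < bright_cutoff else "smart_hdr"
    if lens == "telephoto":
        # Deep Fusion in all but the brightest scenes, where Smart HDR wins.
        return "deep_fusion" if lux < bright_cutoff else "smart_hdr"
    return "smart_hdr"  # the ultra-wide lens never uses Deep Fusion

print(capture_mode("wide", 5))         # night_mode
print(capture_mode("telephoto", 300))  # deep_fusion
```

The point is simply that the camera, not the user, picks the pipeline based on lens and light level.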

Apple provided a couple of sample images showing Deep Fusion in action which I’ve embedded here. They have not provided any non-DF examples yet, but we’ll see those as soon as the beta gets out and people install it. 

Deep Fusion works this way:

The camera shoots a ‘short’ frame at a negative EV value (basically a slightly darker image than you’d like) and pulls sharpness from this frame. It then shoots three regular EV0 photos and a ‘long’ EV+ frame, registers alignment across them and blends them together.

This produces two 12MP photos, which are combined into one 24MP photo. The combination is done using four separate neural networks that take into account the noise characteristics of Apple’s camera sensors as well as the subject matter in the image.

This combination is done on a pixel-by-pixel basis, pulling one pixel at a time to produce the best combination for the overall image. The machine learning models look at the context of the image to determine where elements belong on the image frequency spectrum: sky and other broadly uniform areas in the low-frequency band, skin tones in the medium-frequency zone, and fine detail like clothing and foliage in the high-frequency band.

The system then pulls structure and tonality from one image or another based on ratios. 

The overall result, Apple says, results in better skin transitions, better clothing detail and better crispness at the edges of moving subjects.
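
To make the "pull structure from one image or another" idea concrete, here's a toy sketch in plain Python. It is emphatically not Apple's pipeline: it replaces the neural-network selection with a crude local-sharpness measure, and the tiny grayscale "frames" are invented data.

```python
# Toy per-pixel fusion: copy each pixel from whichever frame is locally
# sharpest, a crude stand-in for Deep Fusion's learned selection.
# Frames are small 2D lists of grayscale values in [0, 1]; all illustrative.

def local_sharpness(img, x, y):
    """How much a pixel differs from the mean of its 4-neighbors (a simple Laplacian)."""
    h, w = len(img), len(img[0])
    neighbors = [img[ny][nx]
                 for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                 if 0 <= ny < h and 0 <= nx < w]
    return abs(img[y][x] - sum(neighbors) / len(neighbors))

def fuse(frames):
    """For each pixel position, take the value from the frame that is sharpest there."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[max(frames, key=lambda f: local_sharpness(f, x, y))[y][x]
             for x in range(w)]
            for y in range(h)]

# A blurry frame vs. a frame with a sharp edge along the middle row.
blurry = [[0.4, 0.5, 0.6], [0.4, 0.5, 0.6], [0.4, 0.5, 0.6]]
sharp  = [[0.5, 0.5, 0.5], [0.0, 1.0, 0.0], [0.5, 0.5, 0.5]]
fused = fuse([blurry, sharp])
print(fused[1][1])  # 1.0: the center pixel comes from the sharper frame
```

The real system additionally weighs noise characteristics and the frequency band a region falls into, but the per-pixel "best source wins" shape of the problem is the same.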

There is currently no way to turn off the Deep Fusion process, but there is a small ‘hack’ to see the difference between images: the ‘over crop’ feature of the new cameras uses the ultra-wide lens, so turning it on disables Deep Fusion, which does not use the ultra-wide lens.

The Deep Fusion process requires around one second of processing time. If you quickly shoot and then tap a preview of the image, it could take around half a second for the image to update to the new version. Most people won’t notice the process happening at all.

As to how it works IRL? We’ll test and get back to you as Deep Fusion becomes available.

01 Oct 2019

Skydio’s second-gen ‘self-flying’ drone is faster, smaller and half the price

There weren’t many scenarios where consumers were deciding between buying the original Skydio R1 autonomous drone and something from DJI. The R1 was the second drone in your arsenal: it cost a couple grand and gave users an experience that was technically impressive, occasionally useful and a little bizarre.

It was exciting but it was still one hell of a niche. The startup had raised $70 million in funding from Andreessen Horowitz, IVP and Playground Global to create a different kind of autonomous drone.

While the first generation looked and felt like a prototype, the startup’s soon-to-be-released Skydio 2 meets the functionality one would want from a primary drone. It is half the price at $999, faster, much smaller and more portable, has longer battery life and can be flown more conventionally with optional accessories.

The biggest issue with the R1 was that its standout autonomous mode was the overpowering default. Here you were buying a $2,000 product, and once it got to be dusk, it wouldn’t even get off the ground. You were limited in distance, limited in speed, and ultimately stuck with a device that could handle edge cases but stumbled on the basics. It enabled shots you could never think of doing — barreling down a mountainside on a snowboard with the drone hot on your tail — but you couldn’t easily pilot the drone for sweeping panoramic shots, instead having to rely on the baked-in cinematic movements called “dronies.”

The Skydio 2 is still meant primarily to be an autonomous companion; the $149 controller is a separate accessory, as is the $149 beacon, which lets the drone fly at greater distance and track users more accurately. The drone has a 200m flight range to the phone, a 1.5km range to the beacon and a 3.5km range to the controller.


It can do all of this much faster now, and for a bit longer. The top speed now maxes out at 36 miles per hour, compared to 26 miles per hour on the first generation. Battery life sits at 23 minutes, which still falls short of what DJI’s Mavic 2 is capable of, but is an improvement over Skydio’s previous generation.

I had the chance to fly the drone around with the new controller, and this drone is perhaps at its most impressive when you forget how smart it is. Piloting the Skydio 2 straight for a group of trees with the controller — a nightmare for even seasoned drone pilots — is a cakewalk, as the drone finds its own way around branches and trunks, using your guidance to move forward as its key objective while figuring out the details on its own. For novice fliers who are never going to be experts, this is pretty priceless functionality and takes plenty of the nerves out of the process — in my brief demo, at least.

The consumer drone market has plenty up against it, but at this price point, the Skydio 2 seems to make a decent sell to a wider swath of consumers than its leading competitors.


The drone is doing more with less: it now has just six onboard tracking cameras, compared to the 12 its first generation had. When it comes to image quality from the Skydio 2’s non-tracking, gimbal-stabilized camera, everything seemed up to snuff for 4K 60fps footage, but I’ll have to spend some more time with the drone before I can judge how things look.

When you stare down the realities of the drone market, Skydio has built an incredibly competitive drone that’s coming in at a more palatable price point. The startup’s first iteration was an experiment for action camera enthusiasts; the Skydio 2 could shift the consumer drone market in a way that few have in DJI’s world.

The $999 drone is launching in November in limited quantities. The company is taking $100 reservations ahead of launch over at its website now. The startup says buyers of its first drone, which launched early last year, will be able to get the Skydio 2 at a “significantly discounted price.”

01 Oct 2019

Auto workers’ strike pushes GM losses past $1 billion

The workers’ strike against General Motors — now in its third week — has cost the automaker more than $1 billion during the third quarter, according to a research note from J.P. Morgan analyst Ryan Brinkman.

And those losses are accelerating with each passing week. GM lost about $480 million during the first week of the strike and another $575 million in the second, according to Brinkman. GM is losing about $82 million of potential profit in North America every day.
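
The analyst's figures hang together on a back-of-the-envelope check, which the following few lines make explicit (all values in millions of dollars, as reported above):

```python
# Sanity-checking the reported J.P. Morgan estimates (values in $ millions).
week1 = 480   # estimated loss during the first week of the strike
week2 = 575   # estimated loss during the second week
daily = 82    # estimated lost North America profit per day

print(week1 + week2)  # 1055: the first two weeks alone exceed $1 billion
print(daily * 7)      # 574: an $82M/day run rate closely matches week two
```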

TechCrunch will update the article if GM responds to a request for comment.

The production stoppage, which began Sept. 16 when 49,000 United Auto Workers members went on strike, is causing a ripple effect through the Detroit automaker’s global operations. The AP reported Tuesday that GM has shut down its pickup truck and transmission factories in Silao, Mexico, affecting 6,000 workers there. GM has also had to close an engine factory in Mexico and an assembly plant in Canada because of the strike.

“GM’s US production stopped immediately when the UAW [United Auto Workers] walked off the job on September 16 and we estimate its Canadian and Mexican facilities became progressively impacted throughout the first week,” Brinkman wrote in his research note this week.

Jefferies analyst Philippe Houchois also weighed in this week noting that the strike could restrict GM’s ability to make investments.

While pay, benefits and the status of temporary workers are the primary drivers of the strike, so are concerns about the automaker’s shift toward electrification. GM and the rest of the automotive industry are pouring money into developing electric vehicles. But this shift also affects workers, because electric vehicles, which require fewer parts, are easier to build. The UAW has said the shift from gas to electric powertrains could lead to a loss of 35,000 jobs over the next few years, according to a research study conducted by the union and recently noted by CNBC.

Last November, GM CEO and Chairman Mary Barra announced plans to cut more than 14,000 jobs in North America, shutter factories and eliminate several car models in an effort to transform GM into a nimble company focused on high-margin SUVs, crossovers and trucks, and on investments in future products like electric and autonomous vehicles.

The actions were meant to safeguard the automaker from an expected downturn in the U.S. market and increase GM’s annual free cash flow by about $6 billion. But it has also caused discontent and concern among workers.

01 Oct 2019

UPS gets FAA approval to operate an entire drone delivery airline

UPS announced today that it is the first company to receive the official nod from the Federal Aviation Administration (FAA) to operate a full “drone airline,” which will allow it to expand its current small drone delivery pilots into a countrywide network.

In its announcement of the news, UPS said it will start by building out drone delivery solutions specific to hospital campuses across the U.S., and then expand to industries outside of healthcare.

UPS racks up a number of firsts as a result of this milestone, thanks to how closely it’s been working with the FAA throughout its development and testing process for drone deliveries. As soon as it was awarded the certification, it made a delivery for WakeMed hospital in Raleigh, N.C., using a Matternet drone, and it also became the first commercial operator to perform a drone delivery for an actual paying customer beyond visual line of sight, thanks to an exemption it received from the government.

This certification, officially titled the FAA’s “Part 135 Standard certification,” grants a far-reaching and broad license to companies that attain it – much more freedom than any commercial drone operation has had previously in the U.S. Here’s a good summary of just how broadly UPS can operate under its new designation:

The FAA’s Part 135 Standard certification has no limits on the size or scope of operations. It is the highest level of certification, one that no other company has attained. UPS Flight Forward’s certificate permits the company to fly an unlimited number of drones with an unlimited number of remote operators in command. This enables UPS to scale its operations to meet customer demand. Part 135 Standard also permits the drone and cargo to exceed 55 pounds and fly at night, previous restrictions governing earlier UPS flights.

Obviously, it’s a huge win for UPS Flight Forward, which is the dedicated UPS subsidiary the company announced it had formed back in July to focus entirely on building out the company’s drone delivery business. But there’s still a lot left to do before you can expect UPS drones to be a regular fixture, or even at all visible in the lives of the average American.

The courier outlined its next steps, which include expanding service to new hospitals and medical facilities, building out ground-based detect-and-avoid systems for its drone fleets, building a central operations control facility and partnering with new drone makers to create different kinds of delivery drones for different payloads.

01 Oct 2019

Streamlit launches open source machine learning application development framework

Streamlit, a new machine learning startup from industry veterans who worked at Google X and Zoox, launched today with a $6 million seed investment and a flexible new open source tool that makes it easier for machine learning engineers to create custom applications to interact with the data in their models.

The seed round was led by Gradient Ventures with participation from Bloomberg Beta. A who’s who of solo investors also participated, including Color Genomics co-founder Elad Gil, #Angels founder Jana Messerschmidt, Y Combinator partner Daniel Gross, Docker co-founder Solomon Hykes and Insight Data Science CEO Jake Klamka.

As for the product, Streamlit co-founder Adrien Treuille says that, as machine learning engineers, he and his co-founders were in a unique position to understand engineers’ needs and build a tool to meet their requirements. Rather than building a one-size-fits-all tool, the key was developing a solution flexible enough to serve multiple requirements, depending on the nature of the data a person is working with.

“I think that Streamlit actually has, I would say, a unique position in this market. While most companies are basically trying to systemize some part of the machine learning workflow, we’re giving engineers these sort of Lego blocks to build whatever they want,” Treuille explained.


Customized self-driving car data application built with Streamlit that enables machine learning engineers to interact with the data.

Treuille says that highly trained machine learning engineers, who have a unique set of skills, actually end up spending an inordinate amount of their time building tools to understand the vast amounts of data they have. Streamlit is trying to help them build these tools faster, using the kind of programming tools they are used to working with.

He says that with a few lines of code, a machine learning engineer can very quickly begin building tools to understand the data and help them interact with it in whatever way makes sense based on the type of data. That may mean building a set of sliders with different variables to interact with the data, or simply creating tables with subsets of data that make sense to the engineer.

Treuille says this toolset has the potential to dramatically transform the way machine learning engineers work with the data in their models. “As people who are machine learning engineers and have seen this and know what it’s like to go through these challenges, it was really exciting for us to say, there’s a better way of doing this, and not just a little bit better, but something that will turn a project that would have taken four weeks and 15,000 lines of code into something that you can do in an afternoon.”

The toolkit is available on GitHub for download starting today.