Author: azeeadmin

04 Dec 2018

Facebook ends platform policy banning apps that copy its features

Facebook will now freely allow developers to build competitors to its features upon its own platform. Today Facebook announced it will drop Platform Policy section 4.1, which stipulates “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

Facebook had previously enforced that policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.

The move will significantly reduce the risk of building on the Facebook platform. It could also cast it in a better light in the eyes of regulators. Anyone seeking ways Facebook abuses its dominance will lose a talking point. And by creating a more fair and open platform where developers can build without fear of straying too close to Facebook’s history or road map, it could reinvigorate its developer ecosystem.

A Facebook spokesperson provided this statement to TechCrunch:

We built our developer platform years ago to pave the way for innovation in social apps and services. At that time we made the decision to restrict apps built on top of our platform that replicated our core functionality. These kind of restrictions are common across the tech industry with different platforms having their own variant including YouTube, Twitter, Snap and Apple. We regularly review our policies to ensure they are both protecting people’s data and enabling useful services to be built on our platform for the benefit of the Facebook community. As part of our ongoing review we have decided that we will remove this out of date policy so that our platform remains as open as possible. We think this is the right thing to do as platforms and technology develop and grow.

The change comes after Facebook locked down parts of its platform in April for privacy and security reasons in the wake of the Cambridge Analytica scandal. Diplomatically, Facebook said it didn’t expect the change to impact its standing with regulators but it’s open to answering their questions.

Earlier in April, I wrote a report on how Facebook used Policy 4.1 to attack competitors it saw gaining traction. The article, “Facebook shouldn’t block you from finding friends on competitors,” advocated for Facebook to make its social graph more portable and interoperable so users could decamp to competitors if they felt mistreated, which in turn would pressure Facebook to act better.

The policy change will apply retroactively. Old apps that lost Find Friends or other functionality will be able to submit their app for review and, once approved, will regain access.

Friend lists still can’t be exported in a truly interoperable way. But at least now Facebook has enacted the spirit of that call to action. Developers won’t be in danger of losing access to that Find Friends Facebook API for treading in its path.

Below is an excerpt from our previous reporting on how Facebook enforced Platform Policy 4.1 to hamper competitors before today’s change:

  • Voxer was one of the hottest messaging apps of 2012, climbing the charts and raising a $30 million round with its walkie-talkie-style functionality. In early January 2013, Facebook copied Voxer by adding voice messaging into Messenger. Two weeks later, Facebook cut off Voxer’s Find Friends access. Voxer CEO Tom Katis told me at the time that Facebook stated his app with tens of millions of users was a “competitive social network” and wasn’t sharing content back to Facebook. Katis told us he thought that was hypocritical. By June, Voxer had pivoted toward business communications, tumbling down the app charts and leaving Facebook Messenger to thrive.
  • MessageMe had a well-built chat app that was growing quickly after launching in 2013, posing a threat to Facebook Messenger. Shortly before reaching 1 million users, Facebook cut off MessageMe’s Find Friends access. The app ended up selling for a paltry double-digit millions price tag to Yahoo before disintegrating.
  • Phhhoto and its fate show how Facebook’s data protectionism encompasses Instagram. Phhhoto’s app that let you shoot animated GIFs was growing popular. But soon after it hit 1 million users, it got cut off from Instagram’s social graph in April 2015. Six months later, Instagram launched Boomerang, a blatant clone of Phhhoto. Within two years, Phhhoto shut down its app, blaming Facebook and Instagram. “We watched [Instagram CEO Kevin] Systrom and his product team quietly using PHHHOTO almost a year before Boomerang was released. So it wasn’t a surprise at all . . . I’m not sure Instagram has a creative bone in their entire body.”
  • Vine had a real shot at being the future of short-form video. The day the Twitter-owned app launched, though, Facebook shut off Vine’s Find Friends access. Vine let you share back to Facebook, and its six-second loops you shot in the app were a far cry from Facebook’s heavyweight video file uploader. Still, Facebook cut it off, and by late 2016, Twitter announced it was shutting down Vine.
04 Dec 2018

Faraday Future furloughs more employees as cash woes continue

For a company with such a forward-looking name, the road ahead has been looking pretty bleak for Faraday Future. Debts, fleeing executives and payroll delays have clouded the last year of the automaker’s history — with ultimately very little to show for it.

Faraday took to Twitter today to note the latest in its continued slide brought on by a “financial crisis.” Due to a cash crunch, the car maker will be furloughing employees — at least 250, according to The Verge. That’s a fairly significant portion of the company’s headcount. In October, it made a sizable cut, reducing staff from 1,000 to 600 employees.

The company places the blame for this latest round of death by 1,000 cuts firmly at the feet of Evergrande Health, claiming that the investor is “refusing to make its scheduled payments” and in doing so, “unequivocally harmed FF employees worldwide, our suppliers, our partners and all of our reservation holders.”

The two parties have been at odds in recent months, following a $2 billion bailout from Evergrande over the summer. In October, the investor accused Faraday of “manipulating” and attempting to break an agreement with previous backer, Smart King.

In the new letter, Faraday says it hopes to have the funding issues resolved in the next two to three months, though given how the year has gone for the company, it will likely be tough to find another eager investor if things don’t get better with Evergrande. The company also says it’s committed to delivering the FF 91, but that, too, will be a tough road.

04 Dec 2018

Elowan is the plantdroid you’ve been looking for

With Big Dog busy pulling Santa’s sleigh, what horrible robotic hybrid is left to haunt our dreams? How about Elowan!

Elowan is a project out of the MIT Media Lab and it’s essentially a mobile houseplant. The plant sends signals to a wheeled transport that rolls back and forth trying to find light. Created by Harpreet Sareen, the robot senses changes in the plant’s electrochemical reactions to tell when it is thirsty for light or even when it is being hung the wrong way.

Elowan is an attempt to demonstrate what augmentation of nature could mean. Elowan’s robotic base is a new symbiotic association with a plant. The agency of movement rests with the plant based on its own bio-electrochemical signals, the language interfaced here with the artificial world.

These in turn trigger physiological variations such as elongation growth, respiration, and moisture absorption. In this experimental setup, electrodes are inserted into the regions of interest (stems and ground, leaf and ground). The weak signals are then amplified and sent to the robot to trigger movements to respective directions.

Such symbiotic interplay with the artificial could be extended further with exogenous extensions that provide nutrition, growth frameworks, and new defense mechanisms.
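
To make the sensing-to-motion loop concrete, here is a minimal, hypothetical sketch of how an amplified plant signal might be mapped to wheel commands. The signal source, threshold and motor interface below are simulated stand-ins for illustration only, not the actual MIT Media Lab implementation.

```python
# Hypothetical sketch of Elowan-style control: compare amplified signals from
# electrodes on two sides of the plant and roll toward the stronger response.
# The signal source and motor interface are simulated stand-ins.
import random
import time

SIGNAL_THRESHOLD = 0.2  # assumed normalized units after amplification


def read_amplified_signal(channel: str) -> float:
    """Simulated ADC read; real hardware would sample the amplifier output."""
    return random.uniform(0.0, 1.0)


def drive(direction: str) -> None:
    """Simulated motor command for the wheeled base."""
    print(f"drive: {direction}")


def control_step() -> None:
    left = read_amplified_signal("leaf_left")
    right = read_amplified_signal("leaf_right")
    if abs(left - right) < SIGNAL_THRESHOLD:
        drive("stop")    # responses roughly equal: stay put
    elif left > right:
        drive("left")    # stronger response on the left: roll left
    else:
        drive("right")


if __name__ == "__main__":
    for _ in range(5):
        control_step()
        time.sleep(0.5)
```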

Will this plant eventually learn to ride towards the scent of blood and eat us? Possibly! To paraphrase Ian Malcolm, our scientists were so preoccupied with whether or not they could make a mobile houseplant, they didn’t stop to think if they should. I, for one, welcome our cyborg houseplant overlords.

04 Dec 2018

Commercial insurtech is like an exclusive club — and Google and Amazon aren’t invited

Tech companies and VCs in the insurance space have probably read many of the news articles about Amazon and Google entering insurance (here, here and here). Given their nearly unlimited resources, this may be intimidating to some in the industry. Whether one views these moves as a threat, a welcomed development or something in-between, it’s important to note that both Google and Amazon have focused almost exclusively on personal lines, which is only one aspect of insurance.

There are many reasons for this — not least of which is Google and Amazon’s desire to add value to their customers who are, for the most part, consumers. Because the customer always comes first, most expect Amazon and Google to stay firmly focused on personal lines.

There is, however, another massive tranche of insurance that is ready for innovation: commercial lines. Commercial insurance is often frighteningly complex and requires too much inside information for tech companies to find it attractive. For the time being, when it comes to commercial insurance, Amazon and Google are firmly on the outside looking in. This competitive moat is one of many reasons that interest in commercial insurtech is heating up.

At the same time, there have been shockingly few commercial-focused startups so far, compared to personal lines companies. According to a recent report from Deloitte, just over $57 million was directed to commercial insurtechs in the first half of 2018 — or 6.6 percent of total insurtech-related funding in that period. In 2017, Deloitte reported a far higher proportion of 11.4 percent. Meanwhile, our analysis at XL Innovate, based on CB Insights data, shows that over $1 billion has been invested in companies that are addressing commercial insurance since 2015, which equates to roughly 10 percent of total insurtech investment.

So, regardless of how you slice it, commercial insurtech startups have been woefully underfinanced relative to insurtech companies addressing personal lines, distribution and other areas. As a result, commercial insurtech is heavily under-penetrated relative to the broader insurtech movement.

Why is this?

The story of the first insurtech wave is similar to many stories across the tech landscape: New ventures were driven by entrepreneurs from outside the industry looking to disrupt what they knew (auto insurance, renters/homeowners insurance or distribution). It’s natural, then, that initial efforts have focused on individual policies and more evident aspects of the insurance market.

Even existing commercial ventures have been concentrated in more obvious areas like distribution and auto. In fact, since 2015, those two categories account for more than half of commercial insurtech funding to date. Nearly all of the current major commercial ventures are in these spaces. Here are some of the highlights:

  • Distribution: Next Insurance, a full-stack commercial insurer, has raised $130 million. CoverHound and Policygenius, meanwhile, have raised just north of $50 million each. Distribution, in particular, has accounted for half the dollars invested across insurtech. Unsurprisingly, this is a trend that persists in the commercial space.
  • Auto: Nauto has raised more than $174 million, and players such as Nexar and ZenDrive are making noise in their own right on the financing side ($44 million for Nexar and $20 million for ZenDrive).

Only a few startups are looking at more complex areas, such as providing higher-quality property intelligence for commercial underwriters. Cape Analytics, for example, uses computer vision to extract information from aerial imagery automatically. This gives insurers access to recent, impactful data for any address across the nation, at time of rating and underwriting, and allows them to better evaluate risk throughout a policy lifecycle.

Why does this matter? Well, for example, according to Cape data, 8 percent of roofs in the U.S. are of poor or severe quality. Buildings with bad-quality roofs have a 50 percent higher loss potential than those with high-quality roofs — they have both a higher likelihood of submitting a claim and, if a claim is submitted, the loss payout is larger. For insurers, knowing the roof condition of a commercial building before providing a quote can help the insurer price policies more accurately and avoid heavy losses. This kind of data is indispensable to commercial insurers, but was unavailable until now.


Or take Windward, a marine risk analytics company, as another example. Windward is able to track every ship’s operational profile and can provide insights on the ship’s geography, weather, port visits, management and navigation. On a practical basis, this means Windward can track whether a ship is sailing at dangerous depths at night, as vessels traveling for longer periods at night at dangerous depths are 2.6x more likely to have contact accidents the following year. Windward can also track when a ship passes through traffic lanes. Vessels traveling for long periods in congested traffic lanes are 2x more likely to have collision accidents the following year. This is the kind of information marine insurers need to have on hand.

Still, there is way more headroom in the space.

Commercial has enormous potential given the magnitude of the market and the relative lack of awareness of its problems outside the insurance industry. Global commercial property and casualty insurance premiums were worth approximately $730 billion in 2017, and by 2021 that figure is expected to rise to almost $900 billion. Meanwhile, only insiders truly understand facultative reinsurance, or how hull insurance is written and who writes it, or how to improve large commercial property insurance. For an entrepreneur coming from outside the industry, these are challenging markets and workflows to understand, let alone disrupt or improve.

On the other hand, insurance insiders who are intimately aware of the current status quo should be excited. The insurtech space now needs these insiders to become more involved, start new ventures, raise capital and help identify and solve the most meaningful problems in commercial insurance. Insiders, whether they be underwriters, actuaries, claims professionals or anyone else who has spent time within the industry, know the pain points, the pitfalls and the potential solutions.

Tackling the commercial space will be more challenging. Assets are larger and volume smaller, meaning learnings will be slower to come by and technologies like AI less effective in the short term. For example, if an insurer is underwriting 350 marine policies and there are only 15 claims per year, when is there enough data to drive statistically significant findings? Commercial lines still rely heavily on human judgment and manual processing. This is not a problem in personal lines because of the immense volume of data that can be harnessed and analyzed. So, although the opportunity is fantastic, it is important to keep in mind that the timeline to impact will likely be longer.
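
To illustrate how thin the data in that example really is, here is a rough back-of-the-envelope sketch (my own illustration, not from the article) using a normal-approximation confidence interval for a claim rate estimated from 15 claims across 350 policies. The interval is wide relative to the estimate, which is why statistically significant findings come slowly in low-volume commercial lines.

```python
# Rough illustration (assumption, not from the article): uncertainty in a claim
# rate estimated from a small commercial book of 350 policies with 15 claims.
import math

policies = 350
claims = 15

rate = claims / policies  # ~4.3% observed claim frequency
# Normal-approximation 95% confidence interval for a binomial proportion.
stderr = math.sqrt(rate * (1 - rate) / policies)
low, high = rate - 1.96 * stderr, rate + 1.96 * stderr

print(f"observed claim rate: {rate:.3f}")
print(f"approx. 95% CI: [{low:.3f}, {high:.3f}]")
# Roughly 0.043 with a CI of about [0.022, 0.064]: the upper bound is nearly
# three times the lower bound, so small rate improvements are hard to detect.
```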

Those involved in the insurance technology wave have many reasons to be excited about commercial insurance, but patience will be key as new ventures look to tackle longstanding issues and as the space heats up. Luckily for entrepreneurs with a unique understanding of the industry, tech companies like Amazon and Google are not in a position to threaten the space for the foreseeable future.

04 Dec 2018

Electric scooters are flimsy, so Superpedestrian is making more robust ones to sell to operators

If you’ve been on an electric scooter from Bird, Lime, Lyft, JUMP via Uber, Skip, Scoot or others, you’re probably familiar with that feeling of impending doom. Superpedestrian, makers of the Copenhagen Wheel, is today emerging with a sturdier, safer and smarter electric scooter. But instead of operating a shared electric scooter network, the plan is to sell these scooters to the players mentioned above.

Superpedestrian’s main offering is a sturdier scooter with self-diagnostic and remote management capabilities. The company says its scooters can maintain themselves for nine to 18 months at a time, while other scooters break down far more often.

Superpedestrian’s scooters are equipped to self-diagnose issues involving components such as the motherboard, motor controller, land management system, batteries and more. In total, Superpedestrian can detect about 100 different things that could go wrong with a scooter.

“So the system is smart enough to identify those common things that take place, common risks and hazards, and then it applies self-protection, which means it protects itself before damage occurs,” Superpedestrian founder and CEO Assaf Biderman told TechCrunch. “If the batteries are out of balance and there’s a heating issue in part of the cells, it balances itself. If temperatures continue to rise, it attenuates to consume less energy. It never lets things get to the point where it catches fire.”

These internal systems are designed to reduce failures, decrease the amount of time human operators need to spend troubleshooting scooters, and ultimately increase the supply of scooters available at any given time.
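
As a rough illustration of what this kind of on-board self-protection can look like, here is a minimal, hypothetical sketch of a battery check that mirrors the behavior Biderman describes: rebalancing when cells drift apart and throttling power as temperature rises. The thresholds and data structures are assumptions for illustration, not Superpedestrian’s actual firmware.

```python
# Hypothetical sketch of an on-board battery self-protection check, loosely
# mirroring the behavior described above. Thresholds and interfaces are assumed.
from dataclasses import dataclass


@dataclass
class BatteryState:
    cell_voltages: list  # volts per cell
    temperature_c: float  # pack temperature


def self_protect(state: BatteryState) -> list:
    """Return the protective actions the controller would take for this state."""
    actions = []
    imbalance = max(state.cell_voltages) - min(state.cell_voltages)
    if imbalance > 0.05:          # assumed imbalance threshold (volts)
        actions.append("rebalance cells")
    if state.temperature_c > 45:  # assumed throttle threshold (deg C)
        actions.append("reduce power draw")
    if state.temperature_c > 60:  # assumed shutdown threshold (deg C)
        actions.append("shut down and flag for service")
    return actions or ["ok"]


print(self_protect(BatteryState([4.05, 4.12, 3.98, 4.10], temperature_c=52.0)))
# -> ['rebalance cells', 'reduce power draw']
```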

Superpedestrian says it already has a big player on board, though Biderman would not disclose which one. What he would share is that the first deployment will happen in Q1 2019.

04 Dec 2018

Samsung fakes test photo by using a stock DSLR image

Samsung’s Malaysian arm has some explaining to do. The company, in an effort to show off the Galaxy A8 Star’s amazing photo retouching abilities, used a cleverly shot portrait, modified it, and then ostensibly passed it off as one taken by the A8.

The trouble began when Serbian photographer Dunja Djudjic noticed someone had bought one of her photos from a service called EyeEm that supplies pictures to Getty Images, a renowned photo reseller. Djudjic, curious as to the buyer, did a quick reverse search and found her image – adulterated to within an inch of its life – on Samsung’s Malaysian product page.

Djudjic, for her part, was a good sport.

My first reaction was to burst out into laughter. Just look at the Photoshop job they did on my face and hair! I’ve always liked my natural hair color (even though it’s turning gray black and white), but I guess the creator of this franken-image prefers reddish tones. Except in the eyes though, where they removed all of the blood vessels.

Whoever created this image, they also cut me out of the original background and pasted me onto a random photo of a park. I mean, the original photo was taken at f/2.0 if I remember well, and they needed the “before” and “after” – a photo with a sharp background, and another one where the almighty “portrait mode” blurred it out. So Samsung’s Photoshop master resolved it by using a different background.

This move follows a decision by Huawei to pull the same stunt with a demo photo in August.

To be fair, Samsung warned us this would happen. “The contents within the screen are simulated images and are for demonstration purposes only,” they write in the fine print, way at the bottom of the page. Luckily for Djudjic, Samsung paid her for her photo.

04 Dec 2018

Google ‘incognito’ search results still vary from person to person, DDG study finds

A study of Google search results by anti-tracking rival DuckDuckGo has suggested that escaping the so-called ‘filter bubble’ of personalized online searches is a perniciously hard problem for the put-upon Internet consumer who just wants to carve out a little unbiased space online, free from the suggestive taint of algorithmic fingers.

DDG reckons it’s not possible even for logged out users of Google search, who are also browsing in Incognito mode, to prevent their online activity from being used by Google to program — and thus shape — the results they see.

DDG says it found significant variation in Google search results, with most of the participants in the study seeing results that were unique to them — and some seeing links others simply did not.

Results within news and video infoboxes also varied significantly, it found.

Meanwhile, it says it found very little difference between the results served in private browsing mode while logged out and those served in normal mode.

“It’s simply not possible to use Google search and avoid its filter bubble,” it concludes.

Google has responded by counter-claiming that DuckDuckGo’s research is “flawed”.

Degrees of personalization

DuckDuckGo says it carried out the research to test recent claims by Google to have tweaked its algorithms to reduce personalization.

A CNBC report in September, based on access provided by Google that let the reporter sit in on an internal meeting and speak to employees on its algorithm team, suggested that Mountain View now uses only very little personalization to generate search results.

“A query a user comes with usually has so much context that the opportunity for personalization is just very limited,” Google fellow Pandu Nayak, who leads the search ranking team, told CNBC this fall.

On the surface, that would represent a radical reprogramming of Google’s search modus operandi — given the company made “Personalized Search” the default for even logged out users all the way back in 2009.

Announcing the expansion of the feature at the time, Google explained it would ‘customize’ search results for these logged-out users via an ‘anonymous cookie’:

This addition enables us to customize search results for you based upon 180 days of search activity linked to an anonymous cookie in your browser. It’s completely separate from your Google Account and Web History (which are only available to signed-in users). You’ll know when we customize results because a “View customizations” link will appear on the top right of the search results page. Clicking the link will let you see how we’ve customized your results and also let you turn off this type of customization.

A couple of years after Google threw the Personalized Search switch, Eli Pariser published his now famous book describing the filter bubble problem. Since then online personalization’s bad press has only grown.

In recent years concern has especially spiked over the horizon-reducing impact of big tech’s subjective funnels on democratic processes, with algorithms carefully engineered to keep serving users more of the same stuff now being widely accused of entrenching partisan opinions, rather than helping broaden people’s horizons.

Especially so where political (and politically charged) topics are concerned. And, well, at the extreme end, algorithmic filter bubbles stand accused of breaking democracy itself — by creating highly effective distribution channels for individually targeted propaganda.

Although there have also been some counterclaims floating around academic circles in recent years that imply the echo chamber impact is itself overblown. (Albeit sometimes emanating from institutions that also take funding from tech giants like Google.)

As ever, where the operational opacity of commercial algorithms is concerned, the truth can be a very difficult animal to dig out.

Of course DDG has its own self-interested iron in the fire here — suggesting, as it is, that “Google is influencing what you click” — given it offers an anti-tracking alternative to the eponymous Google search.

But that does not merit an instant dismissal of a finding of major variation in even supposedly ‘incognito’ Google search results.

DDG has also made the data from the study downloadable — and the code it used to analyze the data open source — allowing others to look and draw their own conclusions.

It carried out a similar study in 2012, after the earlier US presidential election — and claimed then to have found that Google’s search had inserted tens of millions more links for Obama than for Romney in the run-up to that election.

It says it wanted to revisit the state of Google search results now, in the wake of the 2016 presidential election that installed Trump in the White House — to see if it could find evidence to back up Google’s claims to have ‘de-personalized’ search.

For the latest study DDG asked 87 volunteers in the US to search for the politically charged topics of “gun control”, “immigration”, and “vaccinations” (in that order) at 9pm ET on Sunday, June 24, 2018 — initially searching in private browsing mode and logged out of Google, and then again without using Incognito mode.

You can read its full write-up of the study results here.

The results ended up being based on 76 users as those searching on mobile were excluded to control for significant variation in the number of displayed infoboxes.

Here’s the topline of what DDG found:

Private browsing mode (and logged out):

  • “gun control”: 62 variations with 52/76 participants (68%) seeing unique results.
  • “immigration”: 57 variations with 43/76 participants (57%) seeing unique results.
  • “vaccinations”: 73 variations with 70/76 participants (92%) seeing unique results.

‘Normal’ mode:

  • “gun control”: 58 variations with 45/76 participants (59%) seeing unique results.
  • “immigration”: 59 variations with 48/76 participants (63%) seeing unique results.
  • “vaccinations”: 73 variations with 70/76 participants (92%) seeing unique results.

DDG’s contention is that truly ‘unbiased’ search results should produce largely the same results.

Yet, by contrast, the search results its volunteers got served were — in the majority — unique. (Ranging from 57% at the low end to a full 92% at the upper end.)

“With no filter bubble, one would expect to see very little variation of search result pages — nearly everyone would see the same single set of results,” it writes. “Instead, most people saw results unique to them. We also found about the same variation in private browsing mode and logged out of Google vs. in normal mode.”
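
For a sense of how a figure like “62 variations with 52/76 participants (68%) seeing unique results” can be computed, here is a small illustrative sketch; it is my own toy example, not DDG’s open-source analysis code. Each participant’s ordered result list is serialized, and any list that occurs exactly once counts as unique to that participant.

```python
# Illustrative sketch (not DDG's actual analysis code): count how many
# participants saw an ordered result list that no other participant saw.
from collections import Counter

# Each participant's results as an ordered tuple of domains (toy data).
results_by_participant = {
    "p1": ("nytimes.com", "wikipedia.org", "cnn.com"),
    "p2": ("wikipedia.org", "nytimes.com", "cnn.com"),
    "p3": ("nytimes.com", "wikipedia.org", "cnn.com"),
    "p4": ("foxnews.com", "nytimes.com", "wikipedia.org"),
}

counts = Counter(results_by_participant.values())
variations = len(counts)
unique = sum(1 for serp in results_by_participant.values() if counts[serp] == 1)

print(f"{variations} variations; "
      f"{unique}/{len(results_by_participant)} participants saw unique results")
# -> 3 variations; 2/4 participants saw unique results
```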

“We often hear of confusion that private browsing mode enables anonymity on the web, but this finding demonstrates that Google tailors search results regardless of browsing mode. People should not be lulled into a false sense of security that so-called “incognito” mode makes them anonymous,” DDG adds.

Google initially declined to provide a statement responding to the study, telling us instead that several factors can contribute to variations in search results — flagging time and location differences among them.

It even suggested results could vary depending on the data center a user query was connected with — potentially introducing some crawler-based micro-lag.

Google also claimed it does not personalize the results of logged out users browsing in Incognito mode based on their signed-in search history.

However, the company admitted it uses contextual signals to rank results even for logged out users (as that 2009 blog post described) — such as when trying to clarify an ambiguous query.

In which case it said a recent search might be used for disambiguation purposes. (Although it also described this type of contextualization in search as extremely limited, saying it would not account for dramatically different results.)

But with so much variation evident in the DDG volunteer data, there seems little question that Google’s approach very often results in individualized — and sometimes highly individualized — search results.

Some Google users were even served with more or fewer unique domains than others.

Lots of questions naturally flow from this.

Such as: Does Google applying a little ‘ranking contextualization’ sound like an adequately ‘de-personalized’ approach — if the name of the game is popping the filter bubble?

Does it make the served results even marginally less clickable, biased and/or influential?

Or indeed any less ‘rank’ from a privacy perspective… ?

You tell me.

Even the same bunch of links served up in a slightly different configuration has the potential to be majorly significant since the top search link always gets a disproportionate chunk of clicks. (DDG says the no.1 link gets circa 40%.)

And if the topics being Google-searched are especially politically charged even small variations in search results could — at least in theory — contribute to some major democratic impacts.

There is much to chew on.

DDG says it controlled for time- and location-based variation in the served search results by having all participants in the study carry out the search from the US and do so at the very same time.

While it says it controlled for the inclusion of local links (i.e. to cancel out any localization-based variation) by bundling such results with a localdomain.com placeholder (and ‘Local Source’ for infoboxes).

Yet even taking steps to control for space-time based variations it still found the majority of Google search results to be unique to the individual.

“These editorialized results are informed by the personal information Google has on you (like your search, browsing, and purchase history), and puts you in a bubble based on what Google’s algorithms think you’re most likely to click on,” it argues.

Google would counter-argue that’s ‘contextualizing’, not editorializing.

And that any ‘slight variation’ in results is a natural property of the dynamic nature of its Internet-crawling search response business.

Albeit, as noted above, DDG found some volunteers did not get served certain links (when others did), which sounds rather more significant than ‘slight difference’.

In the statement Google later sent us it describes DDG’s attempts to control for time and location differences as ineffective — and the study as a whole as “flawed” — asserting:

This study’s methodology and conclusions are flawed since they are based on the assumption that any difference in search results are based on personalization. That is simply not true. In fact, there are a number of factors that can lead to slight differences, including time and location, which this study doesn’t appear to have controlled for effectively.

One thing is crystal clear: Google is — and always has been — making decisions that affect what people see.

This capacity is undoubtedly influential, given the majority market share captured by Google search. (And the major role Google still plays in shaping what Internet users are exposed to.)

That’s clear even without knowing every detail of how personalized and/or customized these individual Google search results were.

Google’s programming formula remains locked up in a proprietary algorithm box — so we can’t easily (and independently) unpick that.

And this unfortunate ‘techno-opacity’ habit offers convenient cover for all sorts of claim and counter-claim — which can’t really now be detached from the filter bubble problem.

Unless and until we can know exactly how the algorithms work to properly track and quantify impacts.

Also true: Algorithmic accountability is a topic of increasing public and political concern.

Lastly, ‘trust us’ isn’t the great brand mantra for Google it once was.

So the devil may yet get (manually) unchained from all these fuzzy details.

04 Dec 2018

Qualcomm announces the Snapdragon 855 and its new under-display fingerprint sensor

This week, Qualcomm is hosting press and analysts on Maui for its annual Snapdragon Summit. Sadly, we’re not there, but a couple of weeks ago, Qualcomm gave us a preview of the news. There’ll be three days of news and the company decided to start with a focus on 5G, as well as a preview of its new Snapdragon 855 mobile platform. In addition, the company announced its new ultrasonic fingerprint solution for sensors that can sit under the display.

It’ll probably still be a while before there’ll be a 5G tower in your neighborhood, but after years of buzz, it’s fair to say that we’re now getting to the point where 5G is becoming real. Indeed, AT&T and Verizon are showing off live 5G networks on Maui this week. Qualcomm described its event as the “coming out party for 5G,” though I’m sure we’ll hear from plenty of other players who will claim the same in the coming months.

In the short term, what’s maybe more interesting is that Qualcomm also announced its new flagship 855 mobile platform today. While the company didn’t release all of the details yet, it stressed that the 855 is “the world’s first commercial mobile platform supporting multi-gigabit 5G.”

The 855 also features a new multi-core AI engine that promises up to 3x better AI performance compared to its previous mobile platform, as well as specialized computer vision silicon for enhanced computational photography (think something akin to Google’s Night Sight) and video capture.

The company also briefly noted that the new platform has been optimized for gaming. The product name for this is “Snapdragon Elite Gaming,” but details remain sparse. Qualcomm also continues to bet on AR (or “extended reality” as the company brands it).

The last piece of news is likely the most interesting here. Fingerprint sensors are now standard, even on mid-market phones. With its new 3D Sonic Sensors, Qualcomm promises an enhanced ultrasonic fingerprint solution that can sit under the display. In part, this is a rebranding of Qualcomm’s existing under-display sensor, but there’s some new technology here, too. The promise here is that the scanner will work, even if the display is very dirty or if the user installs a screen protector. Chances are, we’ll see quite a few new flagship phones in the next few months (Mobile World Congress is coming up quickly, after all) that will feature these new fingerprint scanners.

04 Dec 2018

Consumer electronics giant Samsung just became the world’s spendiest advertiser, bypassing Procter & Gamble

Who rules the roost in the business world is very much in flux. We see it on a daily basis, with the mantle of “most valuable company” being passed between Apple, Amazon and Microsoft, depending on the day.

Another interesting data point that highlights some of the jockeying for dominance behind the scenes: For the first time, South Korea’s Samsung has beaten out packaged goods company Procter & Gamble in terms of the advertising dollars it’s putting to work. According to new data from AdAge, the consumer electronics and appliance maker splashed out $11.2 billion for advertising and sales promotion last year, compared with P&G, which spent an estimated $10.5 billion.

Samsung, which has a $300 billion market cap, may have spent so heavily in part to counter bad press, including around its Samsung Galaxy Note 7 phone, which had the unfortunate capability of spontaneously bursting into flames. Fighting Apple for constant mindshare isn’t cheap, either. For example, you may have noticed the flurry of anti-iPhone X ads that the company churned out ahead of its Galaxy Note 9 unveiling — an effort to boost its bottom line after sales of its Galaxy S9 phone disappointed.

Samsung isn’t the only tech brand that’s pumping up the volume when it comes to ad spending. Among the companies accelerating their ad budgets most quickly is China’s biggest online retailer, Alibaba Group Holding, which reportedly more than doubled its advertising spending to $2.7 billion in 2017. And Alibaba is trailed, unsurprisingly, by one of its biggest rivals, Tencent Holdings, whose $2 billion in related spend last year was nearly double what it was in 2016.

Though the two are undisputed powerhouses in China, they’re currently locked in a battle for Southeast Asia and India, and it’s a costly war to wage.

Others in the tech world to dominate AdAge’s tally include Alphabet, Netflix and Amazon, which reportedly boosted their 2017 spending by 32 percent, 29 percent and 26 percent, respectively.

Altogether, says AdAge, the U.S. is home to 44 of the companies spending the most on marketing, followed by 13 companies in Japan, 10 companies in Germany and nine companies in France.

You can dig into more of its findings, including a look at the 100 companies and industries that currently spend the most on advertising, here.

04 Dec 2018

Cove.Tool wants to solve climate change one efficient building at a time

As the fight against climate change heats up, Cove.Tool is looking to help tackle carbon emissions one building at a time.

The Atlanta-based startup provides an automated big-data platform that helps architects, engineers and contractors identify the most cost-effective ways to make buildings compliant with energy efficiency requirements. After raising an initial round earlier this year, the company completed the final close of a $750,000 seed round. Since the initial announcement of the round earlier this month, Urban Us, the early-stage fund focused on companies transforming city life, has joined a syndicate that also includes Tech Square Labs and Knoll Ventures.

Helping firms navigate a growing suite of energy standards and options

Cove.Tool software allows building designers and managers to plug in a variety of building conditions, energy options, and zoning specifications to get to the most cost-effective method of hitting building energy efficiency requirements (Cove.Tool Press Image / Cove.Tool / https://covetool.com).

In the US, the buildings we live and work in contribute more carbon emissions than any other sector. Governments across the country are now looking to improve energy consumption habits by implementing new building codes that set higher energy efficiency requirements for buildings. 

However, figuring out the best ways to meet changing energy standards has become an increasingly difficult task for designers. For one, buildings are subject to differing federal, state and city codes that are all frequently updated and overlaid on one another. Therefore, the specific efficiency requirements for a building can be hard to understand, geographically unique and immensely variable from project to project.

Architects, engineers and contractors also have more options for managing energy consumption than ever before – equipped with tools like connected devices, real-time energy-management software and more-affordable renewable energy resources. And the effectiveness and cost of each resource are also impacted by variables distinct to each project and each location, such as local conditions, resource placement, and factors as specific as the amount of shade a building sees.

With designers and contractors facing countless resource combinations and weightings, Cove.Tool looks to make it easier to identify and implement the most cost-effective and efficient resource bundles that can be used to hit a building’s energy efficiency requirements.

Cove.Tool users begin by specifying a variety of project-specific inputs, which can include a vast amount of extremely granular detail around a building’s use, location, dimensions or otherwise. The software runs the inputs through a set of parametric energy models before spitting out the optimal resource combination under the set parameters.

For example, if a project is located on a site with heavy wind flow in a cold city, the platform might tell you to increase window size and spend on energy efficient wall installations, while reducing spending on HVAC systems. Along with its recommendations, Cove.Tool provides in-depth but fairly easy-to-understand graphical analyses that illustrate various aspects of a building’s energy performance under different scenarios and sensitivities.
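
As a highly simplified sketch of that kind of parametric search, consider sweeping a few discrete design options and keeping the cheapest combination that still meets an energy-use target. The options, costs and energy figures below are made-up placeholders for illustration, not Cove.Tool’s actual models or data.

```python
# Simplified illustration of a parametric sweep over building design options:
# pick the cheapest combination that meets an energy-use target. The options,
# costs and energy effects are made-up placeholders, not Cove.Tool data.
from itertools import product

# Each option: (name, added cost in $, change in annual energy use in kWh)
glazing = [("standard glazing", 0, 0), ("triple glazing", 40_000, -30_000)]
walls = [("code-minimum walls", 0, 0), ("high-R walls", 60_000, -45_000)]
hvac = [("baseline HVAC", 0, 0), ("downsized HVAC", -20_000, 10_000)]

BASELINE_ENERGY_KWH = 400_000
TARGET_ENERGY_KWH = 350_000  # assumed code requirement

best = None
for combo in product(glazing, walls, hvac):
    cost = sum(option[1] for option in combo)
    energy = BASELINE_ENERGY_KWH + sum(option[2] for option in combo)
    if energy <= TARGET_ENERGY_KWH and (best is None or cost < best[0]):
        best = (cost, energy, [option[0] for option in combo])

if best:
    cost, energy, names = best
    print(f"cheapest compliant combo: {names} (${cost:,}, {energy:,} kWh/yr)")
```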

Cove.Tool users can input granular project-specifics, such as shading from particular beams and facades, to get precise analyses around a building’s energy performance under different scenarios and sensitivities.

Democratizing building energy modeling

Traditionally, the design process for a building’s energy system can be quite painful for architecture and engineering firms.

An architect would send initial building designs to engineers, who then test out a variety of energy system scenarios over the course of a few weeks. By the time the engineers are able to come back with an analysis, the architects have often made significant design changes, which then get sent back to the engineers, forcing the energy plan to constantly run 1-to-3 months behind the rest of the building. This process can not only lead to less-efficient and more-expensive energy infrastructure, but the hectic back-and-forth can also lead to longer project timelines, unexpected construction issues, delays and budget overruns.

Cove.Tool effectively looks to automate the process of “energy modeling,” which aims to ease the pains of energy design in the same way Building Information Modeling (BIM) has transformed architectural design and construction. Just as BIM creates predictive digital simulations that test all the design attributes of a project, energy modeling uses building specs, environmental conditions, and various other parameters to simulate a building’s energy efficiency, costs and footprint.

By using energy modeling, developers can optimize the design of the building’s energy system, adjust plans in real-time, and more effectively manage the construction of a building’s energy infrastructure. However, the expertise needed for energy modeling falls outside the comfort zones of many firms, who often have to outsource the task to expensive consultants.

The frustrations of energy system design and the complexities of energy modeling are ones the Cove.Tool team knows well. Patrick Chopson and Sandeep Ahuja, two of the company’s three co-founders, are former architects who worked as energy modeling consultants when they first began building out the Cove.Tool software.

After seeing their clients’ initial excitement over the ability to quickly analyze millions of combinations and instantly identify the ones that produce cost and energy savings, Patrick and Sandeep teamed up with CTO Daniel Chopson and focused full-time on building out a comprehensive automated solution that would allow firms to run energy modeling analysis without costly consultants, more quickly, and through an interface that would be easy enough for an architectural intern to use.

So far there seems to be serious demand for the product, with the company already boasting an impressive roster of customers that includes several of the country’s largest architecture firms, such as HGA, HKS and Cooper Carry. And the platform has delivered compelling results — for example, one residential developer was able to identify energy solutions that cost $2 million less than the building’s original model. With the funds from its seed round, Cove.Tool plans to further enhance its sales effort while continuing to develop additional features for the platform.

Changing decision-making and fighting climate change

The value proposition Cove.Tool hopes to offer is clear – the company wants to make it easier, faster and cheaper for firms to use innovative design processes that help identify the most cost-effective and energy-efficient solutions for their buildings, all while reducing the risks of redesign, delay and budget overruns.

Longer-term, the company hopes that it can help the building industry move towards more innovative project processes and more informed decision-making while making a serious dent in the fight against emissions.

“We want to change the way decisions are made. We want decisions to move away from being just intuition to become more data-driven,” the co-founders told TechCrunch.

“Ultimately we want to help stop climate change one building at a time. Stopping climate change is such a huge undertaking but if we can change the behavior of buildings it can be a bit easier. Architects and engineers are working hard but they need help and we need to change.”