Year: 2018

28 Apr 2018

From dorm room to Starbucks, Rip Van Wafels is bringing Euro-inspired snacks to the masses

Rip Pruisken waffled in college (we got that pun safely out of the way for now). He was an Ivy League student at Brown University, and had focused on academics for much of his life. His parents were physicists, and “I thought I would study some sort of cookie-cutter path of studying something that I would use post-college,” he explained. “I didn’t really consider entrepreneurship to be a viable option because I was still in that frame of mind.”

It was during a study trip to Italy that he had an epiphany. He was inside an Italian bookstore looking through business books when he suddenly realized that he had discovered a new passion. “If you can build stuff at a profit, you can build more stuff, and how cool is that? That was my aha moment,” he said.

Being an entrepreneur was one thing, but it wasn’t clear what Pruisken should sell. He had grown up in Amsterdam, where he used to eat stroopwafel, a snack composed of two thin waffle pastries melded together with a syrup center. During his freshman year, he had brought over a large quantity of them to school, and “all of my friends devoured them.” Remembering their popularity, “I literally started making them in my dorm in college, and started selling them on campus” during his junior year.

Selling ‘Van Wafels’ at Brown University

That was 2010. Today, Rip Van Wafels can be found in 12,000 Starbucks locations, and is a popular snack at tech companies, with some larger companies going through tens of thousands of units a week.

Their popularity comes from the intersection of a number of food trends. The snacks are made with natural ingredients and are healthy, with low calorie counts and limited sugar. Perhaps most importantly, they taste great, with different flavors that are designed to strike different moods (a chocolate wafel can work as dessert, while the strawberry wafel feels more like breakfast). The company currently produces eight flavors.

While the startup food company has had tremendous success, none of this was planned when Pruisken got started nearly a decade ago. He worked with co-founder and co-CEO Marco De Leon, a good friend from Brazil who was two years behind him at Brown University and looking for a change of pace from his Morgan Stanley internship.

They spent two years on campus improving the product’s marketing and the quality of the snack, which in hindsight was an important iteration process for what would become the company’s core consumer: well-educated and health-conscious tech workers.

The two stumbled into their market and stumbled into their name. “It started as Rip Wafel,” Pruisken explained, “and we got a cease and desist letter from Van’s,” which makes frozen food waffles among other products. A professor suggested Rip van Winkle, and that inspired the company’s current name. Pruisken himself was so enamored with the brand he changed his own name — Abhishek, which he had grown up with in Amsterdam — to Rip.

After much work, the two founders discovered that a tech company was particularly enjoying the snacks. “We realized we found this insight that one of our customers in the northeast was a tech company, and we talked to them and they said that it was the perfect treat that was an alternative to a candy bar,” he explained. So Pruisken borrowed his brother’s couch and started going door-to-door selling these Euro snacks to every tech company he could find, eventually reaching 80 of them in one summer.

As he sold wafels, the same pattern held: an order for one case would become two cases, then 10, then 20. Eventually, word-of-mouth and distributor partnerships got the snack into the mini-kitchens of dozens of tech companies in San Francisco, as well as into Peet’s Coffee, Whole Foods, and ultimately Starbucks.

Pruisken believes the company’s success has come from iterating on the snack much as a software engineer might fiddle with JavaScript. “We have been reinventing our product every two years,” he said. “We are trying to make our product healthier while providing this very indulgent taste.” That includes experimenting with new ingredients like tapioca syrup and chickpea powder that can provide better nutrition at reduced sugar levels.

He sees the future of the company much the same way. “You can only cut the cycle time down by so much even if you do everything in-house. There are certain components you need to source like certain ingredients or packaging foam,” Pruisken explained. “The way to get ahead is to plan way ahead. So work on the things you want to launch in two years right now.” That includes a number of new flavors, as well as potentially adding products that touch on the brain-enhancing nootropics space.

Ultimately, Pruisken wants to redefine the category of packaged foods. “Convenient foods have been associated with cheaper, lower qualities and generally unhealthy foods in the US,” he said. “I think it would be great if that was elevated not just in the food space but broader.” From a foreign food in a Brown University dorm room to redefining the products on every grocery store shelf, stumbling has paid off for Rip Van, which is taking over the world one wafel at a time.

28 Apr 2018

Solving the affordability crisis one Chattanooga at a time

We all know the success of America’s leading startup hubs, cities like San Francisco, New York City, Boston and several others. Entrepreneurial talent, risk-seeking venture dollars, and dense human networks form an alchemy leading to wealth, jobs, and growth. The main streets and malls of the Midwest may be devastated, but you would never know that walking through the Hudson Yards development on the west side of Manhattan or in San Francisco’s SoMa neighborhood.

Despite their dizzying performance, the intense concentration of success in these zip codes does not bode well for the wider American economy. Geography and zoning ordinances prevent millions from migrating to these hubs, and for those lucky few who can make a living, high housing prices and other costs can place incredible stress on young families.

If we want to ameliorate inequality while lowering that cost burden, then it is up to cities across the nation to build up their own ecosystems and compete effectively. And those cities cannot just be the megalopolis global cities, but have to include the smaller urban cities that are often the core of regions outside the coasts.

I have seen few cities sell themselves as effectively as Chattanooga, whose mayor Andy Berke visited TechCrunch’s offices recently. He was accompanied by Luke Marklin, the CEO of tech-enabled moving startup Bellhops, which has raised $27.2 million in venture capital according to Crunchbase.

The two have teamed up to share the gospel of Chattanooga, but their vision could also be the vision for the future of urban cities in America. The story at this point is well-told: the formerly down-on-its-luck Tennessee city brought its community together in the 1980s and 1990s to revitalize its downtown area. Following the 2008 global financial crisis, the city’s power utility started to revamp its infrastructure, and ultimately decided to build out a gigabit municipal fiber infrastructure for the city of about 175,000.

With that bandwidth in hand, the city has embarked on building a startup hub. It has invested significant resources into its downtown innovation district, and it has worked hard to program events and amenities that will attract a diverse and talented workforce. That concentration is intentional. “We can’t have our assets spread out in a city of our size,” Berke explained. “We need to juice up activity in that ecosystem so that it feels like an ecosystem.”

What makes Chattanooga competitive though, and ultimately an interesting case study of urban leadership, is the mayor’s and the city’s deep understanding of trade-offs. With just over 175,000 people, the city’s population is just slightly higher than the startup-focused Flatiron District in New York. Chattanooga, while a distinctive name, is also not the first city that people think of when they are considering locations to migrate to.

While the city has desirable outdoor and cultural amenities, Berke is realistic about how many people might consider Chattanooga as an adopted home. “We don’t need 10,000 engineers moving to Chattanooga to survive,” he said. “You have to grow more talent, because you don’t want to solely rely on the attraction piece.” He gave the example of connecting businesses with the leadership of the local university to ensure that the skills that graduates were learning lined up better with the needs of industry.

By focusing that training on local residents, startups that might otherwise flee to the Bay Area at the first sign of a VC investment decide to stay put. Berke, speaking about Marklin and Bellhops, said that “He is approaching 100 people and when it gets to 250, that creates tremendous wealth in our community, and most of that wealth gets reinvested in the area as opposed to other types of businesses.”

Chattanooga has become a competitive city. It has used its natural endowments, its institutions, its people, and its community to foster a new generation of its economy. That isn’t a panacea for all social or economic problems of course, but the city now has a base upon which it can continue to improve the quality of life for all of its residents and potentially some transplants as well.

James Fallows wrote something of a paean to local governments this week in his cover piece for The Atlantic. Fallows has spent the past few years investigating an interesting trend in American polling: “Even as national politics induces distrust and despair, most polls show rising faith in local governance.” The reason becomes obvious as he travels around the country. Local governments are shoring up their communities through engaged citizens, smart services, and a focus on bringing everyone together around the future of their homes. In short, they look a bit like Chattanooga.

I am a bit more skeptical than Fallows though of the strength of America’s local governments. While there are certainly success stories, there are also 307 American cities with populations above 100,000. Every single one of them should be focused on increasing their competitiveness using whatever resources — however meager — they have.

The key is that cities don’t become competitive with solutions, they become competitive through systems. Transforming an urban area’s built environment and economy is a process measured in decades, not months. Systems ensure that people and institutions are working together for the long haul, so that when a mayor leaves office, the city’s progress isn’t suddenly halted.

Chattanooga shows what happens when a city can maintain that multi-decade focus on growth and revitalization, while also adapting itself to the realities of the talent inside its borders and the economy in the world at large. Chattanooga may never be as dense as New York City, but it can certainly find a seat at the economic table and be an attractive place to live. Ultimately, we need hundreds of Chattanoogas, urban cities with dynamic economies that offer affordable alternatives to the most expensive startup hub cities in the country.

28 Apr 2018

Emissary wants to make sales networking obsolete

There is nothing meritocratic about sales. A startup may have the best product, the best vision, and the most compelling presentation, only to discover that their sales team is talking to the wrong decision-maker or not making the right kind of small talk. Unfortunately, that critical information — that network intelligence — isn’t written down in a book somewhere or on an online forum, but generally is uncovered by extensive networking and gossip.

For David Hammer and his team at Emissary, that is a problem to solve. “I am not sure I want a world where the best networkers win,” he explained to me.

Emissary is a hybrid SaaS marketplace that connects sales teams on one side with people (called emissaries, naturally) who can guide them through the sales process at companies they are familiar with. The best emissaries are generally ex-executives and employees who have recently left the target company, and therefore understand the decision-making processes and the politics of the organization. “Our first mission is pretty simple: there should be an Emissary on every deal out there,” Hammer said.

Expert networks, such as GLG, have been around for years, but have traditionally focused on investors willing to shell out huge dollars to understand a company’s strategic thinking. Emissary’s goal is to be much more democratized, targeting a broader range of both decision-makers and customers. Its product is designed to be intelligent, encouraging customers to ask for help before a sales process falters. The startup has raised $14 million to date according to Crunchbase, with Canaan leading its most recent Series A round.

While Emissary is certainly a creative startup, it’s the questions it poses about knowledge arbitrage, labor markets, and ethics that I think are most interesting.

Sociologists of science generally distinguish between two forms of knowledge, concepts descended from the work of famed scholar Michael Polanyi. The first is explicit knowledge — the stuff you find in books and on TechCrunch. These are facts and figures — a funding round was this size, or the CEO of a company is this individual. The other form is tacit knowledge. The quintessential example is riding a bike — one has to learn by doing it, and no number of physics or mechanics textbooks are going to help a rider avoid falling down.

While org charts may be explicit knowledge, tacit knowledge is the core of all organizations. It’s the politics, the people, the interests, the culture. There is no handbook on these topics, but anyone who has worked in an organization long enough knows exactly the process for getting something done.

That knowledge is critical and rare, and thus ripe for monetization. That was the original inspiration for Hammer when he set out to build a new startup. “Why does Google ever make a bad decision?” Hammer asked at the time. Here you have the company with the most data in the world and the tools to search through it. “How do they not have the information they need?” The answer is that it has all the explicit knowledge in the world, but none of the tacit knowledge required.

That thinking eventually led into sales, where the information asymmetry between a customer and a salesperson was obvious. “The more I talked to sales people, the more I realized that they needed to understand how their account thinks,” Hammer said. Sales automation tools are great, but what message should someone be sending, and to whom? That’s a much harder problem to solve, but ultimately the one that will lead to a signed deal. Hammer eventually realized that there were individuals who could arbitrage their valuable knowledge for a price.

That monetization creates a new labor market for these sorts of consultants. For employees at large companies, they can now leave, take a year off or even retire, and potentially get paid to talk about what they know about an organization. Hammer said that “people are fundamentally looking for ways to be helpful,” and while the pay is certainly a major highlight, a lot of people see an opportunity to just get engaged. Clearly that proposition is attractive, since the platform has more than 10,000 emissaries today.

What makes this market more fascinating long-term though is whether this can transition from a part-time, between-jobs gig into something more long-term and professional. Could people specialize in something like “how does Oracle purchase things,” much as there is an infrastructure of people who support companies working through the government procurement system?

Hammer demurred a bit on this point, noting that “so much of that is being on the other side of those walls.” It’s not any easier for a potential consultant to learn the decision-making outside of a company than it is for a salesperson. Furthermore, the knowledge of an internal company’s processes degrades, albeit at different rates depending on the organization. Some companies experience rapid change and turnover, while knowledge of other companies may last a decade or more.

All that said, Hammer believes that there will come a tipping point when companies start to recommend emissaries to help salespeople through their own processes. Some companies who are self-aware and acknowledge their convoluted procurement procedures may eventually want salespeople to be advised by people who can smooth the process for all sides.

Obviously, with money and knowledge trading hands, there are significant concerns about ethics. “Ethics have to be at the center of what we do,” Hammer said. “They are not sharing deep confidential information, they’re sharing knowledge about the culture of the organization.” Emissary has put in place procedures to monitor ethics compliance. “Emissaries cannot work with competitors at the same time,” he said. Furthermore, emissaries obviously have to have left their companies, so they can’t influence the buying decision itself.

Networking has been the millstone of every salesperson. It’s time consuming, and there is little data on what calls or coffees might improve a sale or not. If you take Emissary’s vision to its asymptote though, all that could potentially be replaced. Under the guidance of people in the know, the fits and starts of sales could be transformed into a smooth process with the right talking points at just the right time. Maybe the best products could win after all.

28 Apr 2018

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality Martin Lewis underscores the massive challenge facing its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact associated “thousands” of fake ads being run on Facebook as a click-driver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer, asking how, as with the Lewis example, a person could complain about an advert they literally can’t see.

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between being able to technically see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate… )

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested — to make them a whole lot more conservative than they currently are — by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high-risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one European lawmakers at least look increasingly wise to.

Facebook frequently falling back on pointing to its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook had been blasted for not doing enough about can also be explained as a result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix because it’s an exceeding narrow issue, involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with a financial ads. We tend to use a basket of features in order to detect these things.”

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy -hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.

28 Apr 2018

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact associated “thousands” of fake ads being run on Facebook as a click-driver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see?

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between being able to technically see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate… )

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a subset of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested  — to make them a whole lot more conservative than they currently are — by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one European lawmakers at least look increasingly wise to.

Facebook frequently falling back on pointing to its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”
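The class-based scrutiny Schroepfer describes can be sketched as a scoring function that combines weak signals to route higher-risk ads into stricter review. This is a purely illustrative toy under assumed keywords, weights and queue names; it is not Facebook’s actual system.

```python
# Hypothetical sketch of class-based ad scrutiny: weak signals are
# combined to route likely "financial" ads into a stricter review queue.
# All keywords, weights and queue names below are illustrative assumptions.

FINANCIAL_KEYWORDS = {"invest", "bitcoin", "loan", "returns", "crypto", "trading"}

def looks_financial(ad_text: str) -> bool:
    """Crude keyword check standing in for a real text classifier."""
    words = set(ad_text.lower().split())
    return len(words & FINANCIAL_KEYWORDS) >= 1

def review_route(ad_text: str, uses_known_face: bool) -> str:
    """Route an ad to a review queue based on a basket of signals."""
    score = 0
    if looks_financial(ad_text):
        score += 2  # financial ads carry higher fraud risk
    if uses_known_face:
        score += 2  # a recognizable face is a common scam signal
    if score >= 3:
        return "manual_review"  # pre-vet before serving
    if score >= 2:
        return "extra_automated_checks"
    return "standard"

print(review_route("Invest in bitcoin now, huge returns", uses_known_face=True))
```

The point of a scheme like this is that no single signal has to be conclusive; the combination is what pushes an ad over a scrutiny threshold, which is exactly the logic Facebook applied in banning the cryptocurrency category wholesale.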

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook had been blasted for not doing enough about can also be explained as a result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix because it’s an exceedingly narrow issue, involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with a financial ads. We tend to use a basket of features in order to detect these things.”
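Schroepfer’s search-space point is just the compounding of a per-comparison false-match rate across a large gallery of identities. As a rough sketch (the 1e-5 error rate below is an invented illustrative number, not a benchmark of Facebook’s system), the chance of at least one false match grows quickly with gallery size:

```python
# Probability of at least one false match when checking one face against
# a gallery of N identities, assuming independent comparisons.
# The per-pair error rate used below is an illustrative assumption.

def p_any_false_match(per_pair_rate: float, gallery_size: int) -> float:
    return 1 - (1 - per_pair_rate) ** gallery_size

print(p_any_false_match(1e-5, 1_000))          # ~1% for a small gallery
print(p_any_false_match(1e-5, 2_000_000_000))  # effectively certain at Facebook scale
```

Which is consistent with his argument that a face match alone can’t safely drive automated takedowns across a user-base of Facebook’s size, and has to be combined with other signals.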

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.

28 Apr 2018

This year’s Tribeca Film Festival uses AR and VR to explore music-making and empathy

Visiting the Immersive arcade at the Tribeca Film Festival is always challenging. Every year, there are way more virtual reality and augmented reality experiences to try out (not to mention creators to interview) than I can squeeze into just a couple of hours.

This year, as always, I was only able to check out a handful of projects. They ranged from the serious and political to the playful and colorful — though even the playful projects were still exploring some ideas about creativity and human connection.

Terminal 3, for example, uses augmented reality to put the viewer in the position of an interrogator with airport security: You meet and interview a Muslim traveler, and you get to choose from different questions before ultimately deciding whether or not they should be allowed into the country.

Artist Asad J. Malik told me that as someone who grew up in Pakistan, “I’m an expert on [airport] screenings, because I get screened a lot.” For Terminal 3, Malik interviewed real people (one of the options is an interview with Malik himself), though the person you see in front of you doesn’t appear photorealistic. Instead, they’re almost like a digital ghost who might gradually become more lifelike, depending on the questions you ask.

Malik said that he’s not trying to promote a specific political message about Muslims, except to illustrate the enormous variety of personalities, backgrounds and viewpoints among people who may or may not identify themselves as Muslims, but “who the world would identify as Muslims.”

Terminal 3 was created with support from Unity for Humanity and RYOT (a virtual reality-focused studio that, like TechCrunch, is part of Verizon subsidiary Oath). It’s built for Microsoft Hololens — not exactly the most popular platform at Tribeca, but Malik said it was crucial to his approach, because it allows the interview to take place against the background of the real room: “Suddenly this story, this person, it’s in your real space.”

Meanwhile, Lambchild Superstar: Making Music in the Menagerie of the Holy Cow makes no attempt to replicate a real environment. Instead, it takes place in a virtual world of dazzlingly bright colors, populated by animals who can be manipulated to make music — for example, a cow whose tail you can grab and reposition to change the sound made by his farts.

Lambchild Superstar is a collaboration between filmmaker Chris Milk and the band OK Go. OK Go’s Damian Kulash said they initially started out with the question, “What is an OK Go video in VR?” before deciding that was the wrong approach.

Something like the “Upside Down & Inside Out” video (which shows the band flying weightlessly) might seem like a good candidate for 360-degree video, but Kulash said it actually turns out to be “not really about the environment.” Instead, it’s presenting you with an experience in “a very controlled rectangle.”

Lambchild Superstar

So Kulash and Milk decided to explore a different direction, namely allowing users to create their own music.

“I got into my ridiculous rant about the kind of alchemy of music,” Kulash recalled. “You add one sound to another sound, and you come out the other side with this ball of joy and emotion. It’s just crazy: Where did that thing come from?”

But Milk noted that if you give most people a guitar or a piano, they might get intimidated, because they don’t know how to play it: “There’s a barrier there.” Hence the funny environment and animals; it feels more like playing a game than performing music, but you emerge at the end with a unique song.

And it’s a song that you’ve created with another user, which Kulash said was also a key part of the experience.

“Chris is a zealot about that, and for good reason,” he said. “VR can be an extremely isolating technology … but is there a way we can use that, rather than to isolate, to let you have the closeness of a more human experience? It’s a weird thing that we had to remove all the human iconography to do that.”

This year’s Tribeca Immersive is also unusual for being the first to include a couple of games, like Star Child, a platform adventure game from Playful Corp. Playful’s Paul Bettner said that like the company’s previous game Super Lucky’s Tale, Star Child uses 3D and virtual reality to try to breathe new life into a classic gaming genre, namely the platformers like Abe’s Oddysee.

Today is the final day of the Tribeca Immersive, so New Yorkers have one last chance to experience all these projects. But while you might have a hard time finding some of these projects outside a festival environment, Bettner intends to release Star Child as a mobile game as well. It might sound really tough to squeeze a VR experience onto a smaller screen, but apparently for Bettner’s team, it’s not.

“What I’m finding in VR is if we build the content a certain way, with a focus on doing third person VR, and we focus the entire project on just making it stand out and take advantage of what VR can do, then bringing it to what we call flatscreen platform is a much easier transition than the other way around,” he said.

28 Apr 2018

Microsoft attempts to spin its role in counterfeiting case

Earlier this week Eric Lundgren was sentenced to 15 months in prison for selling what Microsoft claimed was “counterfeit software,” but which was in fact only recovery CDs loaded with data anyone can download for free. The company has now put up a blog post setting “the facts” straight, though it’s something of a limited set of those facts.

“We are sharing this information now and responding publicly because we believe both Microsoft’s role in the case and the facts themselves are being misrepresented,” the company wrote. But it carefully avoids the deliberate misconception about software that it promulgated in court.

That misconception, which vastly overstated Lundgren’s crime and led to the sentence he received, is simply to conflate software with a license to operate that software. Without going into details (my original post spells it out at length) it maintained in court that the discs Lundgren was attempting to sell were equivalent to entire licensed operating systems, when they were simply recovery discs that any user, refurbisher, or manufacturer can download and burn for free. Lundgren was going to sell them to repair shops for a quarter each so they could hand them out to people who needed them.

Hardly anyone even makes these discs any more, certainly not Microsoft, and they’re pretty much worthless without a licensed copy of the OS in the first place. But Microsoft convinced the judges that a piece of software with no license or product key — meaning it won’t work properly, if at all — is worth the same as one with a license.

Lundgren had already pleaded guilty to infringing Dell’s trademark by copying the look of its discs, but the value Microsoft convinced the judges those discs have (a total of $700,000) directly led to his 15-month sentence.

Anyway, the company isn’t happy with how it looks after sending a guy to prison for stealing something with no value to anyone but someone with a bum computer and no backup. It summarizes what it thinks are the most important points as follows, with my commentary following the bullets.

Microsoft did not bring this case: U.S. Customs referred the case to federal prosecutors after intercepting shipments of counterfeit software imported from China by Mr. Lundgren.

This is perfectly true. However, Microsoft has continually misrepresented the nature and value of the discs, falsely claiming that they led to lost sales. That’s not possible, of course, since Microsoft gives the contents of these discs away for free. It sells licenses to operate Windows, something you’d have to have already if you wanted to use the discs in the first place.

Lundgren established an elaborate counterfeit supply chain in China: Mr. Lundgren traveled extensively in China to set up a production line and designed counterfeit molds for Microsoft software in order to unlawfully manufacture counterfeit discs in significant volumes.

Microsoft is trying to make it sound like the guy is some criminal mastermind running some big time Windows pirating empire. He literally gave a Dell recovery disc to a duplication shop and told them to make exact copies of it, including the label and paper sleeve.

Lundgren failed to stop after being warned: Mr. Lundgren was even warned by a customs seizure notice that his conduct was illegal and given the opportunity to stop before he was prosecuted.

I can’t speak to this one, but Lundgren told me that the first notice he had that this was being pursued by anyone was when they raided his house. The monetary value of the discs was so small and the counterfeiting piece so minor (fake labels for duplicates of discs that Dell doesn’t even provide any more) that if anything it would be a fine and confiscation of the shipment, not a 5-year case alleging millions in damages.

Lundgren pleaded guilty: The counterfeit discs obtained by Mr. Lundgren were sold to refurbishers in the United States for his personal profit and Mr. Lundgren and his codefendant both pleaded guilty to federal felony crimes.

Lundgren pleaded guilty to counterfeiting the Dell discs, not to counterfeiting Microsoft software. It’s an important distinction because the discs are nearly worthless and copyright crimes are sentenced based on the value of the infringed item. I’ve asked him about the claim that he sold 8,000 of them to some buyer for $28,000, or $3.50 each — something that would make no sense, since any buyer would know these things can be made for pennies.

Lundgren went to great lengths to mislead people: His own emails submitted as evidence in the case show the lengths to which Mr. Lundgren went in an attempt to make his counterfeit software look like genuine software. They also show him directing his co-defendant to find less discerning customers who would be more easily deceived if people objected to the counterfeits.

Printing an accurate copy of a label for a disc isn’t exactly “great lengths.” Early on the company in China printed “Made in USA” on the disc and “Made in Canada” on the sleeve, and had a yellow background when it should have been green — that’s the kind of thing he was fixing.

Lundgren intended to profit from his actions: His own emails submitted as evidence before the court make clear that Mr. Lundgren’s motivation was to sell counterfeit software to generate income for himself.

The plan was to sell these nearly worthless discs (remember, anybody can make one for free) for a quarter each to refurbishers.

Microsoft has a strong program to support legitimate refurbishers and recyclers: Our program supports hundreds of legitimate recyclers, while protecting customers.

The implication is that Lundgren is not a legitimate refurbisher or recycler. He pointed out earlier, however, that his company, which handles recycling for Lenovo, Nintendo, and others, takes care of more e-waste in a year than Microsoft has in a decade.

When a refurbisher installs a fresh version of Windows on a refurbished PC, we charge a discounted rate of $25 for the software and a new license – it is not free.

But if they’re not installing a fresh version of Windows, because the machine already has a licensed copy on it, as so many do, the software is free. There’s no limit on how many a company can make on its own; Microsoft only charges for the licenses. Here, go make one yourself in case you need to do it.

Mr. Lundgren’s scheme was simple. He was counterfeiting Windows software in China and importing it to the United States. Mr. Lundgren intended the software to be sold to the refurbisher community as if it was a legitimate, licensed copy of Windows.

There’s the key right there. “As if it was a legitimate, licensed copy of Windows.”

These are not licensed copies of Windows! They’re discs anyone can make, and that manufacturers and refurbishers can print as many of them as they like, to give to customers who already have a copy of Windows. These discs are for repairing or re-installing a copy of the OS. They did not come with licenses, and Lundgren was not selling or providing licenses.

Don’t let Microsoft fool you the way they helped fool the judges. A recovery disc is something you or I or a refurbisher can make right now for free. A license to operate Windows comes from Microsoft and costs good money. They’re not the same thing and Lundgren was going to sell the former, not the latter.

I’ve asked Microsoft to explain this last point and will update the post if I hear back.