Covid 19

3 new technologies ecommerce brands can use to connect better with customers

Ecommerce was already a fast-growing industry at the beginning of 2020. Now it’s experiencing an unprecedented boom as billions of shoppers seek to replace their physical shopping carts with virtual ones.

What’s more, customer loyalty has been uprooted and is now up for grabs. A study by McKinsey & Company found that consumer behavior has changed drastically across the globe, with huge numbers of consumers trying new shopping behaviors, including purchasing products from new brands, in the past few months. 

These changes are creating new opportunities but also increased competition. 

As a result, companies have been investing in new tech, from AR-generated apps being used to allow customers to ‘try on’ make-up and clothes virtually to gamified shopping promotions. 

But, in the rush to adopt the latest trends and attract new customers, many companies are feeling more out of touch with their audience than ever. 

We spoke with three ecommerce experts to find out what companies are getting wrong and how they can better connect with their audiences using technology. These fast-growing scaleups, part of the most recent batch of Rise Programme participants, represent the best of the best in Dutch innovation. Here’s what they had to say: 

Go where your customers are

ChannelEngine logo and CEO Jorrit Steinz

When choosing a spot for a brick-and-mortar store, everyone knows the most important consideration is location, location, location. You want to set up your store where your customers like to hang out and shop regularly. According to Jorrit Steinz, CEO of ChannelEngine, your ecommerce strategy should be no different. 

And just where is your audience shopping online? According to a study by Digital Commerce 360, sales on marketplace sites accounted for 62% of global web sales in 2020, with the top online marketplaces in the world selling $2.67 trillion in products. 

“While consumers were first searching on a search engine, now they’re searching on marketplaces. Even if they’re searching on Google, they will still find marketplaces so it’s essential for brands to be where consumers are searching,” Steinz said.

Even if consumers do start with a Google search, individual retailers still have to compete with marketplaces for top spots in search results. 

Most new webshops completely rely on Google driving traffic. Then you see the marketplaces competing for the same set of keywords. On top of that, Google itself is competing with Google Shopping. So it’s getting harder and harder to optimize for your own webshop. There’s a whole ecosystem of brands that are only selling on marketplaces, social media, and not even on their own webstore.

ChannelEngine is a software as a service platform that connects brands, retailers, and wholesalers to online marketplaces. Instead of having to manage an Amazon account, eBay listings, and a Zalando portal, companies can manage multiple marketplaces across the globe from this one platform. This means stock levels and orders can be synchronized, product updates can be made automatically, and price levels can be controlled in one place. 
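To make that idea concrete, here’s a minimal sketch of what “manage everything in one place” looks like in code: a single catalog fans stock and price updates out to several marketplace connectors. All the class and method names below are invented for illustration; this is not ChannelEngine’s actual API.

```python
class MarketplaceConnector:
    """A stand-in for one marketplace integration (e.g. Amazon, Zalando)."""
    def __init__(self, name):
        self.name = name
        self.listings = {}  # sku -> {"stock": int, "price": float}

    def update(self, sku, stock, price):
        self.listings[sku] = {"stock": stock, "price": price}


class UnifiedCatalog:
    """One place to change a product; changes fan out everywhere."""
    def __init__(self, connectors):
        self.connectors = connectors
        self.products = {}

    def set_product(self, sku, stock, price):
        self.products[sku] = (stock, price)
        for connector in self.connectors:
            connector.update(sku, stock, price)


amazon = MarketplaceConnector("Amazon")
zalando = MarketplaceConnector("Zalando")
catalog = UnifiedCatalog([amazon, zalando])

# One update keeps stock and price synchronized on both marketplaces.
catalog.set_product("SHOE-42", stock=17, price=89.95)
print(amazon.listings["SHOE-42"]["stock"])   # 17
print(zalando.listings["SHOE-42"]["price"])  # 89.95
```

The point of the sketch is the single write path: sell a unit once, and every channel sees the new stock level.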

For brands looking to break into new markets, rather than spending time on translating websites, researching keywords, and creating specialized campaigns, the transition can be as simple as selecting the marketplace with the best reach in that country. 

As Steinz pointed out, it’s not just about traditional marketplaces. Social media channels are also now transitioning towards becoming virtual shopping malls.

A lot of click channels, like Instagram, Google, and comparison sites, are all turning into transactional channels, which is basically a marketplace. So that means there’s going to be more and more entry points for potential customers.

Instead of navigating to an online shop, consumers will now have their credit cards linked to their Instagram accounts, allowing them to simply click on an ad and buy directly in the app.

“That’s going to be a massive shift for any ecommerce retailer and, if they’re not prepared, it’s going to cost them some potential revenue,” Steinz predicted.

You get the best customer insights by simply listening 

Wonderflow logo and CEO Riccardo Osti

“We’re always talking about digital data sources now online. The tendency is to think that ecommerce is something and then traditional retail is something else. This is absolutely not true,” said Riccardo Osti, CEO of Wonderflow. 

BazaarVoice found that 56% of online shoppers and 45% of brick-and-mortar buyers read reviews online before purchasing a product. This has created a multiplier effect for some product categories: each dollar a company makes online corresponds to four to six dollars made offline. 

“Whatever happens online has an impact on the real world. When I buy something offline, I first read reviews online. Then I go to the shop already knowing which products I want to see and buy,” Osti said.

The more companies realize this and begin to combine online and offline data to inform their strategy as a whole, the better. 

I think a very big mistake is that most companies don’t try to connect with their audience. Historically many brands, especially ones that have a very technical product offering, focus a lot on their product and not on their customers. But times have changed.

Customers are more than willing to share their opinion and connect with brands in the form of online reviews, NPS scores, and customer center feedback. This means there’s already a plethora of customer data at companies’ fingertips. The problem is, many simply don’t know how to translate this data into usable information. 

Wonderflow is a Voice of the Customer (VoC) analytics solution that allows companies to glean insights from different customer feedback sources. Their platform leverages natural language processing to aggregate and analyze all of this feedback (both public and private) in one place.  

The next, and more difficult step, is to translate this information into actionable advice and that’s where Wonderflow’s strength lies. Their predictive technology is able to take current consumer insights, and use them to create actionable predictions for the future. Osti explained:

At Wonderflow we’re now trying to predict what your future appreciation score or new star rating of a specific product is going to be in one month or in one year. 

We start by analyzing what customers say about the product and we identify where there’s space for improvement. So, for example, if the star rating is 3.8 out of five, we can tell you ‘if you want to get a 4.5-star rating in the future, you need to improve features x and y.’ 

The second step we’re working on is the prescriptive part. This allows us to tell you which action you should take to make that improvement happen. For example, ‘run an engineering workshop to identify what the problem is with this specific component of the product.’
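As a back-of-the-envelope illustration of that kind of prediction, here’s a toy model that maps per-feature sentiment to an overall star rating and asks which single feature improvement moves the needle most. The features, weights, and linear form are all assumptions made for the sketch; Wonderflow’s actual predictive models are not public.

```python
# Hypothetical average sentiment per product feature, on a 0-1 scale
feature_sentiment = {"battery": 0.55, "design": 0.90, "noise": 0.40}

# Assumed contribution of each feature to the overall rating
feature_weight = {"battery": 2.0, "design": 1.5, "noise": 1.5}

def predicted_rating(sentiment):
    # Weighted average sentiment, mapped onto a 0-5 star scale
    total_weight = sum(feature_weight.values())
    score = sum(feature_weight[f] * s for f, s in sentiment.items())
    return round(5 * score / total_weight, 2)

print("current:", predicted_rating(feature_sentiment))

# Which single-feature improvement of +0.2 lifts the rating most?
for feature, value in feature_sentiment.items():
    improved = dict(feature_sentiment, **{feature: min(1.0, value + 0.2)})
    print(feature, "->", predicted_rating(improved))
```

Even in this toy version, the output is the shape of advice Osti describes: not just a predicted score, but which feature to fix to reach a target rating.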

Perhaps one of the most exciting things about this new technology is that, by providing narrative text-based prescriptions, absolutely anybody in your company will be able to glean insights from them, not just data analysts. 

“This is the big change that we will see in the industry for the next few years, moving from the old fashioned, unreadable business intelligence platforms that we’ve seen for decades, to intuitive charts and narratives,” Osti told TNW. 

Embrace niche audiences

SocialDatabase logo and CEO Thomas Slabbers

Thomas Slabbers, CEO of SocialDatabase, believes that the biggest mistake companies make when it comes to connecting with their audiences is not spending enough time defining who those audiences are.

At SocialDatabase, we believe in the following formula: RESULT = CONTENT X DATA. Brands spend a lot of time creating the right content, but when it comes to creating the right audience, they often fall short. With just native targeting options available and limited access to data, brands struggle with reaching the right audience. We believe that enriched public data should be the starting point of every campaign.

SocialDatabase created a unique solution for this.

By amplifying publicly available Twitter data, we’ve created SUPERAUDIENCES. SUPERAUDIENCES allow brands to selectively target more relevant audiences through a deeper analysis of public data. These are custom audiences designed to match campaign goals, increasing receptivity and media effectiveness, without using third-party data.

But do we really want to narrow our audience? Isn’t casting a wider net better?

“First of all, the majority of social media users feel the communication coming from brands is irrelevant or unimportant to them. A more narrow audience would make ads more interesting and relevant. Secondly, reducing the waste in a target audience simply saves a lot of budget that would have been spent on the wrong audience. Finally, a more focused audience enables brands to make more impact in a shorter amount of time,” Slabbers explained.

SUPERAUDIENCES are particularly relevant for use cases where quality is more important than scale, whether you’re looking for a niche, B2B, or relevant consumer audience.

As a Formula 1 partner, Heineken used SUPERAUDIENCES to distinguish hardcore F1 fans from casual fans during the Grand Prix of Australia, China, and Spain. Meanwhile, Nutricia, a company that specializes in therapeutic food and clinical nutrition, is using SUPERAUDIENCES to specifically reach healthcare professionals.
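In miniature, the approach amounts to enriching public profiles with inferred attributes and then filtering down to the audience a campaign actually needs. A hedged sketch of that filtering step, with every field, attribute, and profile invented for illustration:

```python
# Hypothetical enriched public profiles: a public handle and bio, plus
# inferred attributes added by some (assumed) enrichment pipeline.
profiles = [
    {"handle": "@anna",  "bio": "cardiologist, marathon runner",
     "inferred": {"healthcare_professional": True,  "f1_fan": False}},
    {"handle": "@bram",  "bio": "lights out and away we go",
     "inferred": {"healthcare_professional": False, "f1_fan": True}},
    {"handle": "@chris", "bio": "dad, gamer",
     "inferred": {"healthcare_professional": False, "f1_fan": False}},
]

def build_audience(profiles, **required):
    # Keep only profiles whose inferred attributes match the campaign goal.
    return [p["handle"] for p in profiles
            if all(p["inferred"].get(k) == v for k, v in required.items())]

print(build_audience(profiles, healthcare_professional=True))  # ['@anna']
print(build_audience(profiles, f1_fan=True))                   # ['@bram']
```

The same mechanism serves both examples above: a Nutricia-style campaign filters for healthcare professionals, a Heineken-style one for F1 fans.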

There you have it: location, listening, and spending more time defining your audience will help you build a stronger connection with your customers. Although brick-and-mortar stores are starting to open up again in some countries, the rise of ecommerce is not going away. But, as Osti explained, combining your retail and ecommerce strategies is the best way to get ahead of the game.


How a theoretical mouse could crack the stock market

A team of physicists at Emory University recently published research indicating they’d successfully managed to reduce a mouse’s brain activity to a simple predictive model. This could be a breakthrough for artificial neural networks. You know: robot brains.

Let there be mice: Scientists can do miraculous things with mice such as grow a human ear on one’s back or control one via computer mouse. But this is the first time we’ve heard of researchers using machine learning techniques to grow a theoretical mouse brain.

Per a press release from Emory University:

The dynamics of the neural activity of a mouse brain behave in a peculiar, unexpected way that can be theoretically modeled without any fine-tuning.

In other words: We can observe a mouse’s brain activity in real-time, but there are simply too many neuronal interactions for us to measure and quantify each and every one – even with AI. So the scientists are using the equivalent of a math trick to make things simpler.

How’s it work? The research is based on a theory of criticality in neural networks. Basically, all the neurons in your brain exist in an equilibrium between chaos and order. They don’t all do the same thing, but they also aren’t bouncing around randomly.

The researchers believe the brain operates in this balance in much the same way other state-transitioning systems do. Water, for example, can change from gas to liquid to solid. And, at some point during each transition, it achieves a criticality where its molecules are in either both states or neither.
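The criticality idea can be sketched with a simple branching process: each active neuron triggers, on average, sigma others. Below sigma = 1, activity quickly dies out (order); above it, activity explodes (chaos); at sigma = 1 the system sits at the critical point, producing cascades of wildly varying sizes. This toy model is purely illustrative and far simpler than the Emory team’s actual model.

```python
import random

def cascade_size(sigma, max_steps=1_000, rng=None):
    # One cascade: start with a single active neuron; each active unit
    # triggers Binomial(2, sigma/2) successors, i.e. sigma on average.
    rng = rng or random.Random(0)
    active, total = 1, 1
    for _ in range(max_steps):
        if active == 0:
            break
        nxt = sum(rng.random() < sigma / 2 for _ in range(2 * active))
        total += nxt
        active = nxt
        if total > 1_000_000:  # cap runaway supercritical cascades
            break
    return total

# Subcritical cascades stay small; critical ones vary enormously in size.
subcritical = [cascade_size(0.5, rng=random.Random(i)) for i in range(200)]
critical = [cascade_size(1.0, rng=random.Random(i)) for i in range(200)]
print("mean cascade, sigma=0.5:", sum(subcritical) / len(subcritical))
print("mean cascade, sigma=1.0:", sum(critical) / len(critical))
```

At sigma = 0.5 the average cascade size settles near 2; at the critical point the distribution becomes heavy-tailed, which is the hallmark of a system balanced between order and chaos.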


The researchers hypothesized that brains, being organic neural networks, function under the same balance between order and chaos. So, they ran a bunch of tests on mice navigating mazes in order to build up a database of brain activity data.

Next, the team went to work developing a working, simplified model that could predict neuron interactions using the experimental data as a target. According to their research paper, their model is accurate to within a few percentage points.

What’s it mean? This is early work, but there’s a reason why scientists use mouse brains for this kind of research: they’re not so different from ours. If you can reduce what goes on in a mouse’s head to a working AI model, it’s likely you can eventually scale that to human-brain levels.

On the conservative side of things, this could lead to much more robust deep learning solutions. Our current neural networks are a pale attempt to imitate what nature does with ease. But the Emory team’s mouse models could represent a turning point in robustness, especially in areas where a model is likely to be affected by outside factors.

This could, potentially, include stronger AI inferences where diversity is concerned and increased resilience against bias. And other predictive systems could benefit as well, such as stock market prediction algorithms and financial tracking models. It’s possible this could even increase our ability to predict weather patterns over long periods of time.

Quick take: This is brilliant, but its actual usefulness remains to be seen. Ironically, the tech and AI industries are also at a weird, unpredictable point of criticality, where brute-force hardware solutions and elegant software shortcuts are starting to pull away from each other.

Still, if we take a highly optimistic view, this could also be the start of something amazing such as artificial general intelligence (AGI): machines that actually think. No matter how we arrive at AGI, it’s likely we’ll need to begin with models capable of imitating nature’s organic neural nets as closely as possible. You’ve got to start somewhere.


Billionaire clown Elon Musk drags the late Chris Farley into Tesla’s feud with Ford

There’s never a dull moment in tech. Elon Musk and Ford CEO Jim Farley got into it on Twitter yesterday after a new Ford advertisement seemingly tossed shade at Tesla’s Autopilot.

Heads up: The real lead here is that Ford’s new BlueCruise kit, a hands-free driving system, will launch on certain Mustang and F-150 models. Can we all take a moment to recognize how awesome the idea of a future autonomous Mustang is?

But: Elon being Elon, there was no way the news was ever going to be about anything other than him.

Ford CEO Jim Farley apparently couldn’t resist trolling Tesla a bit when he tweeted that his company tested BlueCruise “in the real world, so our customers don’t have to.” This has been interpreted to be a jab at Tesla’s simulation-based training methods.

Musk responded (to a tweet featuring the quote) by invoking Farley’s cousin, the late Chris Farley. Yes, that Chris Farley:

Many on Twitter found the reply innocuous and good-natured, while others saw it as over-the-top and disrespectful. It’s generally considered impolite to use a clip of someone’s deceased relative to troll them on social media.

Here’s the thing: It’s macabre for Musk and Farley to joke about training driverless cars. Autopilot failures have been a contributing factor in numerous accidents involving Tesla vehicles, some of which were fatal.

There is currently no production vehicle on the market that can drive itself safely and/or legally. We’ve seen the videos and the fact remains: level two autonomy is not self-driving.

Tesla’s “Autopilot” and “Full Self Driving” systems are not capable of auto-piloting or self-driving. Full stop.


This kind of rhetoric, two childish CEOs bantering about the abilities of their vehicles, gives consumers an inflated view of what these cars are capable of. Whether consumers think Ford’s built something that’s better than “Autopilot,” or that Tesla already has things figured out, the reality of level two autonomy is getting lost in the hype.

The bottom line: The technology powering these vehicles is amazing, but at the end of the day it’s just glorified cruise control. Drivers are meant to keep their hands on the wheel and their eyes on the road at all times when operating any current production vehicle, whether its so-called self-driving features are engaged or not.

When these companies and their CEOs engage in back-and-forth on Twitter, they’re taking a calculated risk that consumers will buy into the rivalry and enjoy the capitalist competition as it plays out for their amusement.

They take the same kind of calculated risk when they continue marketing their products as “self-driving” features even after customers keep overestimating their abilities and dying.


Study: People trust the algorithm more than each other

Our daily lives are run by algorithms. Whether we’re shopping online, deciding what to watch, booking a flight, or just trying to get across town, artificial intelligence is involved. It’s safe to say we rely on algorithms, but do we actually trust them?

Up front: Yes. We do. A trio of researchers from the University of Georgia recently conducted a study to determine whether humans are more likely to trust an answer they believe was generated by an algorithm or crowd-sourced from humans.

The results indicated that humans were more likely to trust algorithms when problems become too complex for them to trust their own answers.

Background: We all know that, to some degree or another, we’re beholden to the algorithm. We tend to trust that Spotify and Netflix know how to entertain us. So it’s not surprising that humans would choose answers based on the sole distinction that they’ve been labeled as being computer-generated.

But the interesting part isn’t that we trust machines, it’s that we trust them when we probably shouldn’t.

How it works: The researchers tapped 1,500 participants for the study. Participants were asked to look at a series of images and determine how many people were in each one. As the number of people in an image increased, participants grew less confident in their answers and were offered the chance to align their responses with either crowd-sourced answers from a group of thousands of people or answers they were told had been generated by an algorithm.

Per the study:

In three preregistered online experiments, we found that people rely more on algorithmic advice relative to social influence as tasks become more difficult. All three experiments focused on an intellective task with a correct answer and found that subjects relied more on algorithmic advice as difficulty increased. This effect persisted even after controlling for the quality of the advice, the numeracy and accuracy of the subjects, and whether subjects were exposed to only one source of advice, or both sources.
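One way to picture the headline effect is as a blending model in which the weight a subject places on algorithmic advice grows with task difficulty. The logistic form and every number below are invented to illustrate the finding, not taken from the paper.

```python
import math

def advice_weight(difficulty, bias=-1.0, slope=0.8):
    # Logistic weight in (0, 1) that rises with task difficulty.
    return 1 / (1 + math.exp(-(bias + slope * difficulty)))

def final_estimate(own, advice, difficulty):
    # Blend the subject's own answer with the advice they were shown.
    w = advice_weight(difficulty)
    return (1 - w) * own + w * advice

# Easy image (few people): the subject mostly keeps their own count.
easy = final_estimate(own=12, advice=15, difficulty=0)
# Hard image (a crowd): the subject leans heavily on the algorithm.
hard = final_estimate(own=120, advice=150, difficulty=5)
print(round(easy, 1), round(hard, 1))
```

Under this toy model, the subject’s final answer drifts toward the advice as images get harder, which is exactly the pattern the experiments measured, regardless of whether the advice was actually any good.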

The problem here is that AI isn’t very well suited for a task such as counting the number of humans in an image. It may sound like a problem built for a computer – it’s math-based, after all – but the fact of the matter is that AI often struggles to identify objects in images, especially when there aren’t clear lines of separation between objects of the same type.

Quick take: The research indicates the general public is probably a little confused about what AI can do. Algorithms are getting stronger and AI has become an important facet of our everyday lives, but it’s never a good sign when the average person seems to believe a given answer is better just because they think it was generated by an algorithm.


EU commission to take hard-line stance against ‘high-risk’ AI

The European Commission is set to unveil a new set of regulations for artificial intelligence products. While some AI tech would be outright banned, other potentially harmful systems would be forced through a vetting process before developers could release them to the general public.

The proposed legislation, per a leak obtained by Politico’s Melissa Heikkila, would ban systems it deems as “contravening the Union values or violating fundamental rights.”

The regulations, if passed, could limit the potential harm done by AI-powered systems involved in “high-risk” areas of operation such as facial recognition and social credit systems.

Per an EU statement:

This proposal will aim to safeguard fundamental EU values and rights and user safety by obliging high-risk AI systems to meet mandatory requirements related to their trustworthiness. For example, ensuring there is human oversight, and clear information on the capabilities and limitations of AI.

The commission’s anticipated legislation comes after years of research internally and with third-party groups, including a 2019 white paper detailing the EU’s ethical guidelines for responsible AI.

It’s unclear at this time exactly when such legislation would pass; the EU has only given it a “2021” time frame.

Also unclear: exactly what this will mean for European artificial intelligence startups and research teams. It’ll be interesting to see exactly how development bans will play out, especially considering no such regulation exists in the US, China, or Russia.

The regulation is clearly aimed at big tech companies and medium-sized AI startups that specialize in controversial AI tech such as facial recognition. But, even with the leaked proposal, there’s still little in the way of information as to how the EU plans to enforce these regulations or exactly how systems will be vetted.


Intel’s new AI helps you get just the right amount of hate speech in your game chat

The Intel microprocessor company was founded in 1968. It’s bushwhacked a trail of technology and innovation in the decades since to become one of the leading manufacturers of computer chips worldwide.

But never mind all that. Because we live in a world where Kodak is a failed cryptocurrency company that’s now dealing drugs and everyone still thinks Elon Musk invented the tunnel.

Which means that here in this, the darkest timeline, we’re stuck with the version of Intel that uses AI to power “White nationalism” sliders and “N-word” toggles for video game chat.

Behold ‘Bleep,’ in all its stupid glory:

What you’re seeing is the UI for Bleep: an AI-powered software solution featuring a series of sliders and toggles that let you determine how much hate speech, in a given category, you want to hear when you’re chatting with people in multiplayer games.
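Strip away the branding and a slider-per-category filter reduces to thresholding classifier scores. Here’s a sketch of that logic, with the categories, scores, and thresholds all invented, since Intel hasn’t published Bleep’s internals.

```python
def should_bleep(scores, thresholds):
    # scores and thresholds both map category -> value in [0, 1].
    # A slider at 0.0 means zero tolerance: any positive score gets bleeped.
    return any(scores.get(cat, 0.0) > thresholds[cat] for cat in thresholds)

thresholds = {"aggression": 0.3, "slurs": 0.0}   # per-category sliders
toxic_message = {"aggression": 0.2, "slurs": 0.7}
clean_message = {"aggression": 0.1, "slurs": 0.0}

print(should_bleep(toxic_message, thresholds))   # True: slur score exceeds 0.0
print(should_bleep(clean_message, thresholds))   # False: everything under threshold
```

The sketch also makes the article’s objection concrete: any slider set above zero is, mechanically, a level of hate speech the interface chooses to let through.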

If you’re wondering what or who this is for: join the crowd. This feels like the kind of solution you get when you apply the “there are no bad ideas” and “failure is not an option” philosophies in equal parts to a problem you have no business addressing in the first place.

To be clear, I’m saying: even if it worked perfectly, Bleep is just a Rube Goldberg machine that replaces your mute button. Censoring the potty words doesn’t help anyone when the context and their absence make the speaker’s intention clear anyway.

Hate speech isn’t a magic invocation we must treat like “He Whose Name We Do Not Say.” It’s a problem that needs to be addressed at social levels far beyond anything Intel can solve with deep learning. And, furthermore, I don’t think it’s going to work.

I think Intel’s AI division is incredible and they do amazing work. But I don’t think you can solve or even address hate speech with natural language processing. In fact, I believe it’s sadly ironic that anyone would try.

AI is biased against any accent that doesn’t sound distinctly white and American or British. Couple that fact with these tidbits: humans struggle to identify hate speech in real-time, hate speech evolves at the speed of memes, and there should be no acceptable level of hate speech permitted (thus tacitly endorsed) by a company through an interactive software interface.

The time Intel spent developing and training an AI to determine how much hate speech directed at various minority groups crossed the line between “none,” “some,” “most,” and “all,” or teaching it to detect the “N-word and all its variants,” could have been spent doing something more constructive.

Bleep, as a solution, is an insult to everyone’s intelligence.

And we’re never going to forget it. Perhaps Intel’s never heard of this thing called social media where, from now until the end of time, we’ll see images from the Bleep interface used to de-nuance the discourse on racial injustice.

So, thanks for that. At least the UI looks good. 

Published April 8, 2021 — 20:08 UTC


Robo-taxi company Waymo now has two CEOs

Take me to your leader(s).

Waymo CEO John Krafcik today stepped down from his leadership position at the Google sister company. According to an email posted to the company’s blog, he’ll maintain an advisory role. Replacing him will be not one, but two Waymo executives: former COO Tekedra Mawakana and former CTO Dmitri Dolgov.

The timing of Krafcik’s departure is notable in that the company’s entering a new phase of development. Waymo was reportedly planning to expand its budding robo-taxi service before the pandemic struck. But, now that there’s a vaccine-shaped light at the end of the tunnel, we can expect the Waymo One robo-taxi service and Waymo Via, an autonomous delivery service, to get a big push.

What’s most interesting here is the company’s decision to go with co-CEOs instead of choosing a single candidate. So far, the only hint we have as to the “why” of it all comes from Krafcik’s email:

Waymo’s new co-CEOs bring complementary skill sets and experiences – most recently as COO and CTO respectively – and have already been working together in close partnership for years in top executive positions at Waymo. Dmitri and Tekedra have my full confidence and support, and of course, the full confidence of Waymo’s board and Alphabet leadership.

The two-CEO strategy isn’t new here. Last year Netflix elevated its head of content strategy, Ted Sarandos, to the co-CEO position alongside longtime CEO Reed Hastings. And, so far, it seems like things are working out for the big N. But, on the other hand, Salesforce tried the same thing with Keith Block and Marc Benioff, and it wasn’t the success the company hoped it would be.

Waymo’s business model works a bit more like Netflix than Salesforce though. Where Salesforce is a massive company that does a bunch of different things under the umbrella of “cloud-based software services,” Netflix can split itself into two distinct models: making stuff and streaming stuff.

And, when it comes to Waymo, we see a two-pronged approach where the company’s focus is split between B2B offerings such as fleets and turn-key autonomous delivery solutions and consumer-facing endeavors such as robo-taxis and B2C last-mile delivery services.

It’ll be fascinating to see how it all plays out over the next few years. And, with any luck, we’ll all be reading about the new leadership team’s many successes from the safety and comfort of a fully autonomous vehicle in the near future.

Published April 2, 2021 — 18:13 UTC


What if you’re living in a simulation, but there’s no computer?

Swedish philosopher Nick Bostrom’s simulation argument says we might be living in a computer-generated reality. Maybe he’s right. There’s currently no known method by which we could investigate the parameters of our “programming,” so it’s up to each of us to decide whether to believe in The Matrix or not.

Perhaps it’s a bit more nuanced than that though. Maybe he’s only half-wrong – or half-right, depending on your philosophical view.

What if we are living in a simulation, but there’s no computer (in the traditional sense) running it?

Here’s the wackiest, most improbable theory I could cobble together from the weirdest papers I’ve ever covered. I call it: “Simulation Argument: Live and Unplugged.”


Bostrom’s hypothesis is actually quite complicated, but it can be explained rather easily. According to him, one or more of the following statements must be true:

  • The human species is very likely to go extinct before reaching a “posthuman” stage
  • Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)
  • We are almost certainly living in a computer simulation

Bostrom’s basically saying that humans in the future will probably run ancestry simulations on their fancy futuristic computers. Unless they can’t, don’t want to, or humanity gets snuffed out before they get the chance.


As many people have pointed out, there’s no way to do the science when it comes to the simulation hypothesis. Just like there’s no way for the ants in an ant colony to understand why you’ve put them there, or what’s going on beyond the glass, you and I can’t slip the void to have a chat with the programmers responsible for coding us. We’re constrained by physical rules, whether we understand them or not.

Quantum Physics!

Except, of course, in quantum mechanics. There, all the classical physics rules we spent millennia coming up with make almost no sense. In the reality you and I see every day, for example, an object can’t be in two places at the same time. But the heart of quantum mechanics involves this very principle.

The universe at large appears to obey a different set of rules than the ones that directly apply to you and me in our everyday existence.


Scientists like to describe the universe in terms of rules because, from where we’re sitting, we’re basically looking at infinity from the perspective of an amoeba. There’s no ground truth for us to compare notes against when we, for example, try to figure out how gravity works in and around a black hole. We use tools such as mathematics and the scientific method to determine what’s really real.

So why are the rules different for people and stars than they are for singularities and wormholes? Or, perhaps more correctly: if the rules are the same for everything, why are they applied in different measures across different systems?

Wormholes, for example, could, in theory, allow objects to take shortcuts through physical spaces. And who knows what’s actually on the other side of a black hole?

But you and I are stuck here with boring old gravity, only able to be in a single place at a time. Or are we?

Organic neural networks!

Humans, as a system, are actually incredibly connected. Not only are we tuned in somewhat to the machinations of our environment, but we can spread information about it across vast distances at incredible speeds. For example, no matter where you live, it’s possible for you to know the weather in New York, Paris, and on Mars in real-time.

What’s important there isn’t how technologically advanced smartphones or modern computers have become, but that we continue to find ways to increase and evolve our ability to share knowledge and information. We’re not on Mars, but we know what’s going on there almost as if we were.

And, what’s even more impressive, we can transfer that information across iterations. A child born today doesn’t have to discover how to make fire and then spend their entire life developing the combustion engine. It’s already been done. They can look forward and develop something new. Elon Musk’s already made a pretty good electric engine, so maybe our kids will figure out a fusion engine or something even better.

In AI terms, we’re essentially training new models on the output of old models. And that makes humanity itself a neural network. Each generation of humans adds selected information from the previous generation’s output to its input cycle and then, stack by stack, develops new methods and novel inferences.
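That compounding can be sketched as a loop in which each generation starts from the last one’s accumulated output instead of from scratch. The numbers and names are purely illustrative.

```python
def generation(knowledge, discoveries_per_lifetime=3):
    # A generation inherits everything, then adds its own discoveries.
    inherited = set(knowledge)
    new = {f"discovery-{len(inherited) + i}"
           for i in range(discoveries_per_lifetime)}
    return inherited | new

knowledge = {"fire"}
for _ in range(4):  # four generations, each building on the last
    knowledge = generation(knowledge)

print(len(knowledge))  # 13: the 1 inherited fact plus 3 per generation
```

Nothing is rediscovered; each pass through the loop only ever adds, which is the sense in which the fire-to-combustion-engine handoff resembles stacked layers of a network.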

The Multiverse!

Where it all comes together is in the wackiest idea of all: our universe is a neural network. And, because I’m writing this on a Friday, I’ll even raise the stakes and say our universe is one of many universes that, together, make up a grand neural network.

That’s a lot to unpack, but the gist involves starting with quantum mechanics and maintaining our assumptions as we zoom out beyond what we can observe.

We know that subatomic particles, in what we call the quantum realm, react differently when observed. That’s a feature of the universe that seems incredibly significant for anything that might be considered an observer.

If you imagine all subatomic systems as neural networks, with observation being the sole catalyst for execution, you get an incredibly complex computation mechanism that’s, theoretically, infinitely scalable.


Rather than assume, as we zoom out, that every system is an individual neural network, it makes more sense to imagine each system as a layer inside of a larger network.

And, once you reach the biggest self-contained system we can imagine, the whole universe, you arrive at a single necessary conclusion: if the universe is a neural network, its output must go somewhere.

That’s where the multiverse comes in. We like to think of ourselves as “characters” in a computer simulation when we contemplate Bostrom’s theory. But what if we’re more like cameras? Not physical cameras like the one on your phone, but a “camera” in the sense of a developer setting the POV for players in a video game.


If our job is to observe, it’s unlikely we’re the entities the universe-as-a-neural-network outputs to. It stands to reason that we’d be more likely to be considered tools or necessary byproducts in the grand scheme.

However, if we imagine our universe as simply another layer in an exponentially bigger neural network, it answers all the questions that derive from trying to shoehorn simulation theory into being a plausible explanation for our existence.

Most importantly: a naturally occurring, self-feeding, neural network doesn’t require a computer at all. 

In fact, neural networks almost never involve what we usually think of as computers. Artificial neural networks have only been around for a matter of decades, but organic neural networks, AKA brains, have been around for hundreds of millions of years.

Wrap up this nonsense!

In conclusion, I think we can all agree that the most obvious answer to the question of life, the universe, and everything is the wackiest one. And, if you like wacky, you’ll love my theory. 

Here it is: our universe is part of a naturally occurring neural network spread across infinite or near-infinite universes. Each universe in this multiverse is a single layer designed to sift through data and produce a specific output. Within each of these layers are infinite or near-infinite systems that form networks within the network.

Information travels between the multiverse’s layers through natural mechanisms. Perhaps wormholes are where data is received from other universes and black holes are where it’s sent for output extraction into other layers. Seems about as likely as us all living in a computer, right?

Behind the scenes, in the places where scientists are currently looking for all the missing dark matter in the universe, are the underlying physical mechanisms that invisibly stitch together our observations (classical reality) with whatever ultimately lies beyond the great final output layer.

My guess: there’s nobody on the receiving end, just a rubber hose connecting “output” to “input.”

Published April 2, 2021 — 20:06 UTC


Facebook’s feckless ‘Fairness Flow’ won’t fix its broken AI

Facebook today published a blog post detailing a three-year-old solution to its modern AI problems: an algorithm inspector that only works on some of the company’s systems.

Up front: Called Fairness Flow, the new diagnostic tool allows machine learning developers at Facebook to determine whether certain kinds of machine learning systems contain bias against or towards specific groups of people. It works by inspecting the data flow for a given model.

Per a company blog post:

To measure the performance of an algorithm’s predictions for certain groups, Fairness Flow works by dividing the data a model uses into relevant groups and calculating the model’s performance group by group. For example, one of the fairness metrics that the toolkit examines is the number of examples from each group. The goal is not for each group to be represented in exactly the same numbers but to determine whether the model has a sufficient representation within the data set from each group.

Other areas that Fairness Flow examines include whether a model can accurately classify or rank content for people from different groups, and whether a model systematically over- or underpredicts for one or more groups relative to others.
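Based only on that description, the group-by-group diagnostic might be sketched like this. This is an illustration of the general idea, not Facebook’s actual Fairness Flow code, which is not public:

```python
from collections import defaultdict

# Hypothetical sketch of a group-wise diagnostic: split a model's
# predictions by group, then compare sample counts and accuracy per group.
# Like Fairness Flow, it only *reports* disparities; it fixes nothing.

def group_metrics(examples):
    """examples: list of (group, prediction, label) tuples."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0})
    for group, prediction, label in examples:
        stats[group]["n"] += 1
        stats[group]["correct"] += int(prediction == label)
    return {
        g: {"n": s["n"], "accuracy": s["correct"] / s["n"]}
        for g, s in stats.items()
    }

data = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 1), ("B", 0, 1),
]
print(group_metrics(data))
# Group A: 3 examples, accuracy 2/3; group B: 2 examples, accuracy 1/2
```

Notice that the numbers tell you a disparity exists between groups A and B, but nothing about why, which is exactly the limitation discussed below.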

Background: The blog post doesn’t clarify exactly why Facebook’s touting Fairness Flow right now, but its timing gives a hint at what might be going on behind the scenes at the social network.

MIT Technology Review’s Karen Hao recently penned an article exposing Facebook’s anti-bias efforts. Her piece asserts that Facebook is motivated solely by “growth” and apparently has no intention of combating bias in AI where doing so would inhibit its ceaseless expansion.

Hao wrote:

It was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth.

In the wake of Hao’s article, Facebook’s top AI guru, Yann LeCun, immediately pushed back against the article and its reporting.

Facebook had allegedly timed the publication of a research paper to coincide with Hao’s article. Based on LeCun’s reaction, the company appeared gobsmacked by the piece. Now, a scant few weeks later, we’ve been treated to a 2,500+ word blog post on Fairness Flow, a tool that addresses the exact problems Hao’s article discusses.

[Read: Facebook AI boss Yann LeCun goes off in Twitter rant, blames talk radio for hate content]

However, “addresses” might be too strong a word. Here are a few snippets from Facebook’s blog post on the tool:

  • Fairness Flow is a technical toolkit that enables our teams to analyze how some types of AI models and labels perform across different groups. Fairness Flow is a diagnostic tool, so it can’t resolve fairness concerns on its own.
  • Use of Fairness Flow is currently optional, though it is encouraged in cases that the tool supports.
  • Fairness Flow is available to product teams across Facebook and can be applied to models even after they are deployed to production. However, Fairness Flow can’t analyze all types of models, and since each AI system has a different goal, its approach to fairness will be different.

Quick take: No matter how long and boring Facebook makes its blog posts, it can’t hide the fact that Fairness Flow can’t fix any of the problems with Facebook’s AI.

The reason bias is such a problem at Facebook is that so much of the AI at the social network is black-box AI, meaning we have no clue why it makes the output decisions it does in a given iteration.

Imagine a game where you and all your friends throw your names in a hat and then your good pal Mark pulls one name out and gives that person a crisp five dollar bill. Mark does this 1,000 times and, as the game goes on, you notice that only your white, male friends are getting money. Mark never seems to pull out the name of a woman or non-white person.

Upon investigation, you’re convinced that Mark isn’t intentionally doing anything to cause the bias. Instead, you determine the problem must be occurring inside the hat.

At this point, you have two options. One: you can stop playing the game and go get a new hat, and this time try it out before you play again to make sure it doesn’t have the same biases.

Or you could go the route that Facebook’s gone: tell people that hats are inherently biased, and you’re working on new ways to identify and diagnose those problems. After that, just insist everyone keep playing the game while you figure out what to do next.
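The hat game above maps neatly onto what a diagnostic tool can and can’t do. Here’s a toy simulation, with every number invented for illustration:

```python
import random

# Toy version of the hat game: simulate a draw process with a hidden bias,
# then surface the skew the way a diagnostic tool would -- by counting
# outcomes per group, without ever explaining the mechanism behind them.

def biased_draw(names, weights, rng):
    """Draw one name from the hat, with hidden per-name weights."""
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
names = ["group_x"] * 5 + ["group_y"] * 5   # equal representation in the hat
weights = [2.0] * 5 + [1.0] * 5             # hidden bias in the draw itself

counts = {"group_x": 0, "group_y": 0}
for _ in range(1000):
    counts[biased_draw(names, weights, rng)] += 1

# The counts reveal the skew (roughly 2-to-1), but say nothing about
# *why* it exists -- the limitation of any observe-and-report tool.
print(counts)
```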

Bottom line: Fairness Flow is nothing more than an opt-in “observe and report” tool for developers. It doesn’t solve or fix anything.

Published March 31, 2021 — 17:38 UTC


Scientists used AI to link cryptomarkets with substance abusers on Reddit and Twitter

An international team of researchers recently developed an AI system that pieces together bits of information from dark web cryptomarkets, Twitter, and Reddit in order to better understand substance abusers.

Don’t worry, it doesn’t track sales or expose users. It helps scientists better understand how substance abusers feel and what terms they’re using to describe their experiences.

The relationship between mental health and substance abuse is well-studied in clinical environments, but how users discuss and interact with one another in the real world remains beyond the realm of most scientific studies.

According to the team’s paper:

Recent results from the Global Drug Survey suggest that the percentage of participants who have been purchasing drugs through cryptomarkets has tripled since 2014 reaching 15 percent of the 2020 respondents (GDS).

In this study, we assess social media data from active opioid users to understand what are the behaviors associated with opioid usage to identify what types of feelings are expressed. We employ deep learning models to perform sentiment and emotion analysis of social media data with the drug entities derived from cryptomarkets.

The team developed an AI to crawl three popular cryptomarkets where drugs are sold in order to determine nuanced information about what people were searching for and purchasing.

Then they crawled popular drug-related subreddits such as r/opiates and r/drugnerds for posts related to the cryptomarket terminology in order to gather emotional sentiment. Where the researchers had difficulty gathering enough Reddit posts with easy-to-label emotional sentiment, they used Twitter posts with relevant hashtags to fill in the gaps.

The end result was a cornucopia of data that allowed the team to produce robust emotional sentiment analyses for various substances.
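A drastically simplified sketch of that pipeline: match posts against terms harvested from market listings, then score the matches with a sentiment lexicon. The study itself used deep learning models for sentiment and emotion analysis; every term and post below is invented for illustration:

```python
# Hypothetical, much-simplified version of the paper's pipeline: drug
# entities derived from cryptomarket crawls are used to filter social
# media posts, which are then scored for sentiment.

DRUG_TERMS = {"oxy", "fentanyl"}   # stand-in for crawled market terms
LEXICON = {"relief": 1, "love": 1, "scared": -1, "withdrawal": -1}

def score_post(text: str):
    """Return a lexicon sentiment score, or None if no tracked entity appears."""
    tokens = text.lower().split()
    if not DRUG_TERMS.intersection(tokens):
        return None                # post isn't about a tracked entity
    return sum(LEXICON.get(token, 0) for token in tokens)

posts = [
    "finally some relief after the oxy",
    "scared of withdrawal from fentanyl",
    "unrelated post about music",
]
print([score_post(p) for p in posts])  # [1, -2, None]
```

The real system replaces the keyword filter and word-count lexicon with learned models, but the shape of the pipeline (entity matching, then sentiment scoring) is the same.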

In the future, the team hopes to find a way to gain better access to dark web cryptomarkets in order to create stronger sentiment models. The ultimate goal of the project is to help healthcare professionals better understand the relationship between mental health and substance abuse.

Per the team’s paper:

To identify the best strategies to reduce opioid misuse, a better understanding of cryptomarket drug sales that impact consumption and how it reflects social media discussions is needed.

Published March 30, 2021 — 21:03 UTC