Billionaire clown Elon Musk drags the late Chris Farley into Tesla’s feud with Ford

There’s never a dull moment in tech. Elon Musk and Ford CEO Jim Farley got into it on Twitter yesterday after a new Ford advertisement seemingly threw shade at Tesla’s Autopilot.

Heads up: The real lead here is that Ford’s new BlueCruise kit, a hands-free driver-assistance system, will launch on certain Mustang and F-150 models. Can we all take a moment to recognize how awesome the idea of an autonomous Mustang in the future is?

But: Elon being Elon, there was no way the news was ever going to be about anything other than him.

Ford CEO Jim Farley apparently couldn’t resist trolling Tesla a bit when he tweeted that his company tested BlueCruise “in the real world, so our customers don’t have to.” This has been interpreted to be a jab at Tesla’s simulation-based training methods.

Musk responded (to a tweet featuring the quote) by invoking Farley’s cousin, the late Chris Farley. Yes, that Chris Farley:

Many on Twitter found the reply innocuous and good-natured; others saw it as over-the-top and disrespectful. It’s generally considered impolite to use a clip of someone’s deceased relative to troll them on social media.

Here’s the thing: It’s macabre for Musk and Farley to joke about training driverless cars. Autopilot failures have been a contributing factor in numerous accidents involving Tesla vehicles, some of which were fatal.

There is currently no production vehicle on the market that can drive itself safely and/or legally. We’ve seen the videos and the fact remains: level two autonomy is not self-driving.

Tesla’s “Autopilot” and “Full Self-Driving” systems are not capable of auto-piloting or self-driving. Full stop.

This kind of rhetoric, two childish CEOs bantering about the abilities of their vehicles, gives consumers an inflated view of what these cars are capable of. Whether consumers come away thinking Ford has built something better than “Autopilot,” or that Tesla already has things figured out, the reality of level two autonomy is getting lost in the hype.

The bottom line: The technology powering these vehicles is amazing but, at the end of the day, it’s just glorified cruise control. Drivers are meant to keep their hands on the wheel and their eyes on the road at all times when operating any current production vehicle, whether its so-called self-driving features are engaged or not.

When these companies and their CEOs engage in back-and-forth on Twitter, they’re taking a calculated risk that consumers will buy into the rivalry and watch the capitalist competition play out for their amusement.

They take the same kind of calculated risk when they keep marketing features as “self-driving” even after customers repeatedly overestimate those systems’ abilities and die.

Study: People trust the algorithm more than each other

Our daily lives are run by algorithms. Whether we’re shopping online, deciding what to watch, booking a flight, or just trying to get across town, artificial intelligence is involved. It’s safe to say we rely on algorithms, but do we actually trust them?

Up front: Yes. We do. A trio of researchers from the University of Georgia recently conducted a study to determine whether humans are more likely to trust an answer they believe was generated by an algorithm or crowd-sourced from humans.

The results indicated that humans were more likely to trust algorithms when problems become too complex for them to trust their own answers.

Background: We all know that, to some degree or another, we’re beholden to the algorithm. We tend to trust that Spotify and Netflix know how to entertain us. So it’s not surprising that humans would choose answers based on the sole distinction that they’ve been labeled as computer-generated.

But the interesting part isn’t that we trust machines, it’s that we trust them when we probably shouldn’t.

How it works: The researchers tapped 1,500 participants for the study. Participants were asked to look at a series of images and determine how many people were in each image. As the number of people in an image increased, participants grew less confident in their own answers and were offered the chance to align their responses with either crowd-sourced answers from a group of thousands of people or answers they were told had been generated by an algorithm.

Per the study:

In three preregistered online experiments, we found that people rely more on algorithmic advice relative to social influence as tasks become more difficult. All three experiments focused on an intellective task with a correct answer and found that subjects relied more on algorithmic advice as difficulty increased. This effect persisted even after controlling for the quality of the advice, the numeracy and accuracy of the subjects, and whether subjects were exposed to only one source of advice, or both sources.

The problem here is that AI isn’t well suited to a task like counting the number of humans in an image. It may sound like a problem built for a computer (it’s math-based, after all), but AI often struggles to identify objects in images, especially when there aren’t clear lines of separation between objects of the same type.
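To see why, here’s a minimal sketch of the kind of off-the-shelf approach you might reach for: torchvision’s pretrained Faster R-CNN counting “person” detections in a photo. The confidence threshold and the image file below are arbitrary illustrations, not anything from the study, and crowded scenes are exactly where this kind of count falls apart.

```python
# Minimal sketch: counting people in an image with an off-the-shelf detector.
# Assumes torchvision's pretrained Faster R-CNN; the 0.7 confidence cutoff and
# "crowd.jpg" are illustrative choices, not values from the study.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def count_people(image_path: str, confidence: float = 0.7) -> int:
    """Return the number of detections labeled 'person' above a confidence cutoff."""
    image = transforms.ToTensor()(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    # COCO class 1 is 'person'. Overlapping bodies in a crowd often merge into
    # one box or drop below the cutoff, which is exactly where counts go wrong.
    return int(((output["labels"] == 1) & (output["scores"] > confidence)).sum())

print(count_people("crowd.jpg"))  # hypothetical image file
```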

Quick take: The research indicates the general public is probably a little confused about what AI can do. Algorithms are getting stronger and AI has become an important facet of our everyday lives, but it’s never a good sign when the average person seems to believe a given answer is better just because they think it was generated by an algorithm.

EU commission to take hard-line stance against ‘high-risk’ AI

The European Commission is set to unveil a new set of regulations for artificial intelligence products. While some AI tech would be outright banned, other potentially harmful systems would be forced through a vetting process before developers could release them to the general public.

The proposed legislation, per a leak obtained by Politico’s Melissa Heikkila, would ban systems deemed to be “contravening the Union values or violating fundamental rights.”

The regulations, if passed, could limit the potential harm done by AI-powered systems involved in “high-risk” areas of operation such as facial recognition and social credit systems.

Per an EU statement:

This proposal will aim to safeguard fundamental EU values and rights and user safety by obliging high-risk AI systems to meet mandatory requirements related to their trustworthiness. For example, ensuring there is human oversight, and clear information on the capabilities and limitations of AI.

The commission’s anticipated legislation comes after years of research internally and with third-party groups, including a 2019 white paper detailing the EU’s ethical guidelines for responsible AI.

It’s unclear at this time exactly when such legislation would pass; the EU has only given it a “2021” time frame.

Also unclear: exactly what this will mean for European artificial intelligence startups and research teams. It’ll be interesting to see exactly how development bans will play out, especially considering no such regulation exists in the US, China, or Russia.

The regulation is clearly aimed at big tech companies and medium-sized AI startups that specialize in controversial AI tech such as facial recognition. But, even with the leaked proposal, there’s still little in the way of information as to how the EU plans to enforce these regulations or exactly how systems will be vetted.

Intel’s new AI helps you get just the right amount of hate speech in your game chat

Intel was founded in 1968. It’s blazed a trail of technology and innovation in the decades since to become one of the leading manufacturers of computer chips worldwide.

But never mind all that. Because we live in a world where Kodak is a failed cryptocurrency company that’s now dealing drugs and everyone still thinks Elon Musk invented the tunnel.

Which means that here in this, the darkest timeline, we’re stuck with the version of Intel that uses AI to power “White nationalism” sliders and “N-word” toggles for video game chat.

Behold ‘Bleep,’ in all its stupid glory:

What you’re seeing is the UI for Bleep, an AI-powered software solution featuring a series of sliders and toggles that let you determine how much hate speech, in a given category, you want to hear when you’re chatting with people in multiplayer games.
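Intel hasn’t published how Bleep works under the hood, so purely as a hypothetical sketch, here’s roughly what per-category “sliders” could amount to if they map user tolerances onto scores from some toxicity classifier:

```python
# Hypothetical sketch only: Intel hasn't documented Bleep's internals.
# This just shows what per-category "slider" settings could look like if they
# map user tolerances onto scores from some speech-toxicity classifier.
from dataclasses import dataclass, field

@dataclass
class BleepSettings:
    # 0.0 = mute everything in this category, 1.0 = let everything through
    thresholds: dict[str, float] = field(default_factory=lambda: {
        "misogyny": 0.0,
        "racism": 0.0,
        "ableism": 0.0,
    })

def should_mute(category_scores: dict[str, float], settings: BleepSettings) -> bool:
    """Mute a chat snippet if any category score exceeds the user's tolerance."""
    return any(
        score > settings.thresholds.get(category, 0.0)
        for category, score in category_scores.items()
    )

# Scores from a (hypothetical) classifier running on one voice-chat snippet.
print(should_mute({"racism": 0.8, "misogyny": 0.1}, BleepSettings()))  # True
```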

If you’re wondering what or who this is for: join the crowd. This feels like the kind of solution you get when you apply the “there are no bad ideas” and “failure is not an option” philosophies in equal parts to a problem you have no business addressing in the first place.

To be clear, I’m saying: even if it worked perfectly, Bleep would just be a Rube Goldberg machine that replaces your mute button. Censoring the potty words doesn’t help anyone when the context, and the conspicuous bleeps themselves, make the speaker’s intention clear anyway.

Hate speech isn’t a magic invocation we must treat like “He Whose Name We Do Not Say.” It’s a problem that needs to be addressed at social levels far beyond anything Intel can solve with deep learning. And, furthermore, I don’t think it’s going to work.

I think Intel’s AI division is incredible and they do amazing work. But I don’t think you can solve or even address hate speech with natural language processing. In fact, I believe it’s sadly ironic that anyone would try.

AI is biased against any accent that doesn’t sound distinctly white and American or British. Couple that fact with these tidbits: humans struggle to identify hate speech in real-time, hate speech evolves at the speed of memes, and there should be no acceptable level of hate speech permitted (thus tacitly endorsed) by a company through an interactive software interface.

The time Intel spent developing and training an AI to determine how much hate speech directed at various minority groups crossed the line between “none,” “some,” “most,” and “all,” or teaching it to detect the “N-word and all its variants,” could have been spent doing something more constructive.

Bleep, as a solution, is an insult to everyone’s intelligence.

And we’re never going to forget it. Perhaps Intel’s never heard of this thing called social media where, from now until the end of time, we’ll see images from the Bleep interface used to de-nuance the discourse on racial injustice.

So, thanks for that. At least the UI looks good. 

Published April 8, 2021 — 20:08 UTC

Robo-taxi company Waymo now has two CEOs

Take me to your leader(s).

Waymo CEO John Krafcik today stepped down from his leadership position at the Google sister company. According to an email posted to the company’s blog, he’ll maintain an advisory role. Replacing him will be not one, but two Waymo executives: former COO Tekedra Mawakana and former CTO Dmitri Dolgov.

The timing of Krafcik’s departure is notable in that the company’s entering a new phase of development. Waymo was reportedly planning to expand its budding robo-taxi service before the pandemic struck. But, now that there’s a vaccine-shaped light at the end of the tunnel, we can expect the Waymo One robotaxi service and Waymo Via, an autonomous delivery service, to get a big push.

What’s most interesting here is the company’s decision to go with co-CEOs instead of choosing a single candidate. So far, the only hint we have as to the “why” of it all comes from Krafcik’s email:

Waymo’s new co-CEOs bring complementary skill sets and experiences – most recently as COO and CTO respectively – and have already been working together in close partnership for years in top executive positions at Waymo. Dmitri and Tekedra have my full confidence and support, and of course, the full confidence of Waymo’s board and Alphabet leadership.

The two-CEO strategy isn’t new. Last year Netflix elevated its head of content, Ted Sarandos, to the co-CEO position alongside longtime CEO Reed Hastings. And, so far, it seems like things are working out for the big N. But, on the other hand, Salesforce tried the same thing with Keith Block and Marc Benioff, and it wasn’t the success the company hoped it would be.

Waymo’s business model works a bit more like Netflix’s than Salesforce’s, though. Where Salesforce is a massive company that does a bunch of different things under the umbrella of “cloud-based software services,” Netflix can split itself into two distinct models: making stuff and streaming stuff.

And, when it comes to Waymo, we see a two-pronged approach where the company’s focus is split between B2B offerings such as fleets and turn-key autonomous delivery solutions and consumer-facing endeavors such as robo-taxis and B2C last-mile delivery services.

It’ll be fascinating to see how it all plays out over the next few years. And, with any luck, we’ll all be reading about the new leadership team’s many successes from the safety and comfort of a fully autonomous vehicle in the near future.

Published April 2, 2021 — 18:13 UTC

What if you’re living in a simulation, but there’s no computer?

Swedish philosopher Nick Bostrom’s simulation argument says we might be living in a computer-generated reality. Maybe he’s right. There currently exists no known method by which we could investigate the parameters of our “programming,” so it’s up to each of us to decide whether to believe in The Matrix or not.

Perhaps it’s a bit more nuanced than that though. Maybe he’s only half-wrong – or half-right, depending on your philosophical view.

What if we are living in a simulation, but there’s no computer (in the traditional sense) running it?

Here’s the wackiest, most improbable theory I could cobble together from the weirdest papers I’ve ever covered. I call it: “Simulation Argument: Live and Unplugged.”

Philosophy!

Bostrom’s hypothesis is actually quite complicated.

But it can be explained rather easily. According to him, one or more of the following statements must be true:

  • The human species is very likely to go extinct before reaching a “posthuman” stage
  • Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)
  • We are almost certainly living in a computer simulation

Bostrom’s basically saying that humans in the future will probably run ancestry simulations on their fancy futuristic computers. Unless they can’t, don’t want to, or humanity gets snuffed out before they get the chance.

Physics!

As many people have pointed out, there’s no way to do the science when it comes to the simulation hypothesis. Just like there’s no way for the ants in an ant colony to understand why you’ve put them there, or what’s going on beyond the glass, you and I can’t slip the void to have a chat with the programmers responsible for coding us. We’re constrained by physical rules, whether we understand them or not.

Quantum Physics!

Except, of course, in quantum mechanics. There, all the classical physics rules we spent millennia coming up with make almost no sense. In the reality you and I see every day, for example, an object can’t be in two places at the same time. But the heart of quantum mechanics involves this very principle.

The universe at large appears to obey a different set of rules than the ones that directly apply to you and me in our everyday existence.

Astrophysics!

Scientists like to describe the universe in terms of rules because, from where we’re sitting, we’re basically looking at infinity from the perspective of an amoeba. There’s no ground truth for us to compare notes against when we, for example, try to figure out how gravity works in and around a black hole. We use tools such as mathematics and the scientific method to determine what’s really real.

So why are the rules different for people and stars than they are for singularities and wormholes? Or, perhaps more correctly: if the rules are the same for everything, why are they applied in different measures across different systems?

Wormholes, for example, could, in theory, allow objects to take shortcuts through physical spaces. And who knows what’s actually on the other side of a black hole?

But you and I are stuck here with boring old gravity, only able to be in a single place at a time. Or are we?

Organic neural networks!

Humans, as a system, are actually incredibly connected. Not only are we tuned in somewhat to the workings of our environment, but we can spread information about it across vast distances at incredible speeds. For example, no matter where you live, it’s possible for you to know the weather in New York, Paris, and on Mars in real-time.

What’s important there isn’t how technologically advanced smartphones or modern computers have become, but that we continue to find ways to increase and evolve our ability to share knowledge and information. We’re not on Mars, but we know what’s going on almost as if we were.

And, what’s even more impressive, we can transfer that information across iterations. A child born today doesn’t have to discover how to make fire and then spend their entire life developing the combustion engine. It’s already been done. They can look forward and develop something new. Elon Musk’s already made a pretty good electric engine, so maybe our kids will figure out a fusion engine or something even better.

In AI terms, we’re essentially training new models on the output of old models. And that makes humanity itself a neural network. Each generation of humans adds selected information from the previous generation’s output to its input cycle and then, layer by layer, develops new methods and novel inferences.
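As a toy illustration of that idea (and nothing more), here’s a tiny loop in which each “generation” is a model trained only on the labels produced by the generation before it, so knowledge carries forward without being re-derived from scratch:

```python
# Toy illustration only (not a real model of culture): each "generation" is a
# tiny regression model trained on the labels produced by the previous
# generation, so knowledge is passed forward rather than rediscovered.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
# "Ground truth" that only the first generation ever learns from directly.
labels = 3.0 * X[:, 0] + rng.normal(0, 0.1, size=200)

for generation in range(5):
    model = LinearRegression().fit(X, labels)
    # The next generation never sees the original source; it inherits whatever
    # the previous generation produced as output.
    labels = model.predict(X)
    print(f"generation {generation}: learned slope = {model.coef_[0]:.3f}")
```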

The Multiverse!

Where it all comes together is in the wackiest idea of all: our universe is a neural network. And, because I’m writing this on a Friday, I’ll even raise the stakes and say our universe is one of many universes that, together, make up a grand neural network.

That’s a lot to unpack, but the gist involves starting with quantum mechanics and maintaining our assumptions as we zoom out beyond what we can observe.

We know that subatomic particles, in what we call the quantum realm, react differently when observed. That’s a feature of the universe that seems incredibly significant for anything that might be considered an observer.

If you imagine all subatomic systems as neural networks, with observation being the sole catalyst for execution, you get an incredibly complex computation mechanism that’s, theoretically, infinitely scalable.

Rather than assume, as we zoom out, that every system is an individual neural network, it makes more sense to imagine each system as a layer inside of a larger network.

And, once you reach the biggest self-contained system we can imagine, the whole universe, you arrive at a single necessary conclusion: if the universe is a neural network, its output must go somewhere.

That’s where the multiverse comes in. We like to think of ourselves as “characters” in a computer simulation when we contemplate Bostrom’s theory. But what if we’re more like cameras? Not physical cameras like the one on your phone, but “cameras” in the sense a developer means when setting the POV for players in a video game.

If our job is to observe, it’s unlikely we’re the entities the universe-as-a-neural-network outputs to. It stands to reason that we’d be more likely to be considered tools or necessary byproducts in the grand scheme.

However, if we imagine our universe as simply another layer in an exponentially bigger neural network, it answers all the questions that derive from trying to shoehorn simulation theory into being a plausible explanation for our existence.

Most importantly: a naturally occurring, self-feeding neural network doesn’t require a computer at all.

In fact, neural networks almost never involve what we usually think of as computers. Artificial neural networks have only been around for a matter of decades, but organic neural networks, AKA brains, have been around for at least millions of years.

Wrap up this nonsense!

In conclusion, I think we can all agree that the most obvious answer to the question of life, the universe, and everything is the wackiest one. And, if you like wacky, you’ll love my theory. 

Here it is: our universe is part of a naturally-occurring neural network spread across infinite or near-infinite universes. Each universe in this multiverse is a single layer designed to sift through data and produce a specific output. Within each of these layers are infinite or near-infinite systems that comprise networks within the network.

Information travels between the multiverse’s layers through natural mechanisms. Perhaps wormholes are where data is received from other universes and black holes are where it’s sent for output extraction into other layers. Seems about as likely as us all living in a computer, right?

Behind the scenes, in the places where scientists are currently looking for all the missing dark matter in the universe, are the underlying physical mechanisms that invisibly stitch together our observations (classical reality) with whatever ultimately lies beyond the great final output layer.

My guess: there’s nobody on the receiving end, just a rubber hose connecting “output” to “input.”

Published April 2, 2021 — 20:06 UTC

Scientists used AI to link cryptomarkets with substance abusers on Reddit and Twitter

An international team of researchers recently developed an AI system that pieces together bits of information from dark web cryptomarkets, Twitter, and Reddit in order to better understand substance abusers.

Don’t worry, it doesn’t track sales or expose users. It helps scientists better understand how substance abusers feel and what terms they’re using to describe their experiences.

The relationship between mental health and substance abuse is well-studied in clinical environments, but how users discuss and interact with one another in the real world remains beyond the realm of most scientific studies.

According to the team’s paper:

Recent results from the Global Drug Survey suggest that the percentage of participants who have been purchasing drugs through cryptomarkets has tripled since 2014 reaching 15 percent of the 2020 respondents (GDS).

In this study, we assess social media data from active opioid users to understand what are the behaviors associated with opioid usage to identify what types of feelings are expressed. We employ deep learning models to perform sentiment and emotion analysis of social media data with the drug entities derived from cryptomarkets.

The team developed an AI to crawl three popular cryptomarkets where drugs are sold in order to determine nuanced information about what people were searching for and purchasing.

Then they crawled popular drug-related subreddits such as r/opiates and r/drugnerds for posts related to the cryptomarket terminology in order to gather emotional sentiment. When the researchers had trouble gathering enough Reddit posts with easy-to-label emotional sentiment, they used Twitter posts with relevant hashtags to fill in the gaps.

The end result was a data cornucopia that allowed the team to build a robust emotional sentiment analysis for various substances.
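The paper doesn’t ship code, but the general recipe is easy to sketch: filter posts for drug-related terms harvested from the cryptomarkets, then run an off-the-shelf sentiment model over them. Everything below (the example posts, the term list, the Hugging Face pipeline) is an illustrative stand-in, not the team’s actual setup.

```python
# Minimal sketch of the general approach, not the team's actual pipeline:
# match posts against drug-related terms harvested elsewhere, then score their
# sentiment with an off-the-shelf model from the transformers library.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

# Hypothetical posts and terms, purely for illustration.
posts = [
    "first week off oxy and the restless legs are unbearable",
    "finally a month clean, feeling hopeful for once",
]
drug_terms = {"oxy", "fentanyl", "kratom"}

for post in posts:
    if any(term in post.lower() for term in drug_terms):
        print(post, "->", sentiment(post)[0])
```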

In the future, the team hopes to find a way to gain better access to dark web cryptomarkets in order to create stronger sentiment models. The ultimate goal of the project is to help healthcare professionals better understand the relationship between mental health and substance abuse.

Per the team’s paper:

To identify the best strategies to reduce opioid misuse, a better understanding of cryptomarket drug sales that impact consumption and how it reflects social media discussions is needed.

Published March 30, 2021 — 21:03 UTC

Why robots make great surgeons and crappy nurses

Robotic surgery systems are used in thousands of hospitals around the world. A decade ago they were clunky machines built to assist with routine procedures. Today, they’re capable of conducting end-to-end surgeries without human aid.

Recent leaps in the field of deep learning have made difficult tasks such as surgery, electronics assembly, and piloting a fighter jet relatively simple to automate. It might take a decade to train a human in all the medical knowledge required to perform brain surgery, and that cost is roughly the same for every subsequent human surgeon.

But AI is different. The initial investment to create a robotic surgery device might be large, but that all changes once you’ve produced a working model. Instead of 8-12 years to create a human specialist, factories can be built to produce AI surgeons en masse. Over time, the cost of maintaining and operating a surgical machine – one capable of working 24/7/365 without drawing a paycheck – would likely become trivial versus maintaining a human surgical staff.

That’s not to say there’s no place for human surgeons in the future. We’ll always need human experts capable of informing the next generation of machines. And there are some procedures that remain beyond the abilities of modern AI and robotics. But surgery, much like any other precision-based endeavor, lies well within the domain of modern AI.

Surgery is a specific skill and, for the most part, robots excel at automating tasks that require more precision than creativity. And that’s exactly why robot surgeons are commonplace, but we’re likely decades away from a fully-functioning AI-powered nurse.

And this is exactly why AI didn’t have a huge impact during the pandemic. When COVID-19 first hit, there was a lot of optimism that big tech would save the day with AI. The idea was that companies such as Google and Microsoft would come up with incredible contact-tracing mechanisms that would allow us to tailor medical responses at an extremely granular level. This, we collectively figured, would lead to a truncated pandemic.

We were wrong, but only because there wasn’t really anything for AI to do. Where it could help, in aiding the rapid development of a vaccine, it did. But the vast majority of our problems in hospitals had to do with things a modern robot can’t fix.

What we needed, during the last patient peak, were more human nurses and PPE for them. Robots can’t look around and learn like a human; they have to be trained for exactly what they’ll be doing. And that’s just not possible during giant emergency situations where, for example, a hospital’s floor plan changes to accommodate an increase in patients and massive quantities of new equipment are introduced.

Researchers at Johns Hopkins University recently conducted a study to determine what we’ll need to do in order for robots to aid healthcare professionals during future pandemics. According to them, modern robots aren’t up to the task:

A big issue has been deployability and how quickly a non-expert user can customize a robot. For example, our ICU ventilator robot was designed for one kind of ventilator that pushes buttons. But some ventilators have knobs, so we need to be able to add a modality so that the robot can also manipulate knobs. Say you want one robot that can service multiple ventilators; then you’d need a mobile robot with an arm attachment, and that robot could also do plenty of other useful jobs on the hospital floor.

That’s all well and fine when things are going perfectly. But what happens when the knob pops off or someone brings in a new kind of machine with toggles or a touch-screen? Humans have no problem adapting to these situations, but a robot would need an entirely new accessory and a training update to compensate.

In order for developers to create a “nurse robot,” they’d need to anticipate everything a nurse encounters on a daily basis. Good luck with that.

AI and machines can be adapted to perform certain tasks related to nursing, such as assisting with intake or recording and monitoring patients’ vital signs. But there isn’t a machine in the world that can perform the day-to-day routine functions of a typical hospital staff nurse.

Nurses spend the majority of their time responding to real-time situations. In a given shift, a nurse interacts with patients, sets up and breaks down equipment, handles precision instruments, carries heavy objects through people-filled spaces, solves mysteries, keeps meticulous notes, and acts as a liaison between the medical staff and the general public.

We have answers to most of those problems individually; putting them all together in a single mobile unit is the hard part.

That Boston Dynamics robot that does backflips, for example, could certainly navigate a hospital, carry things, and avoid causing injury or damage. But it has no way of knowing where a doctor might have accidentally left the chart it needs to update its logs, how to calm down a scared patient, or what to do if an immobile patient misses the bedpan.

Published March 30, 2021 — 17:58 UTC

Can AI be hypnotized?

It’s no longer considered science fiction fodder to imagine a human-level machine intelligence in our lifetimes. Year after year we see the status quo in AI research shattered as yesterday’s algorithms give way to today’s systems.

One day, perhaps within a matter of decades, we might build machines with artificial neural networks that imitate our brains in every meaningful way. And when that happens, it’ll be important to make sure they’re not as easy to hack as we are.

Robo-hypno-tism?

The Holy Grail of AI is human-level intelligence. Modern AI might seem pretty smart given all the hyperbolic headlines you see, but the truth is that there isn’t a robot on the planet that can walk into my kitchen today and make me a cup of coffee without any outside help.

This is because AI doesn’t think. It doesn’t have a “theater of the mind” in which novel thoughts engage with memories and motivators. It just turns input into output when it’s told to. But some AI researchers believe there are methods beyond deep learning by which we can achieve a more “natural” form of artificial intelligence.

One of the most commonly pursued paths towards artificial general intelligence (AGI) – which is, basically, another way of saying human-level AI – is the development of artificial neural networks that mimic our brains.

And, if you ask me, that raises the question: could a human-level machine intelligence be hacked by a hypnotist?

Killer robots, killer schmobots

While everyone else is worried about the Terminator breaking down the door, it feels like we’re overlooking the risk of human vulnerabilities in the machines we trust.

The field of hypnotism is an oft-debated one, but there’s probably something to it. Entire forests’ worth of peer-reviewed research papers have been published on hypnotism and its impact on psychotherapy and other fields. Consider me a skeptic who believes mindfulness and hypnotism are closer than cousins.

However, according to recent research, a human can be placed into an altered state of consciousness through the invocation of a single word. This, of course, doesn’t work with just anyone. In the study I read, they found a ‘hypnotic virtuoso’ to test their hypothesis on.

And if the scientific community is willing to consider the applicability of a single-individual study on hypnotism to the public at large, we should probably worry about how it’ll affect our robots too.

It’s all fun and games when you’re imagining a hypnotized Alexa slurring its words and recalling its childhood as Jeff Bezos’ alarm clock. But when you imagine a terrorist hacking millions of driverless vehicles at the same time using hypnotic traffic light patterns, it’s a bit spookier.

Isn’t this just fear-mongering?

It’s not actually all that far-fetched. Machine bias is, arguably, the biggest problem in the field of artificial intelligence. We feed our machines mass quantities of human-generated or human-labeled data, so there’s no way for them to avoid our biases. That’s why GPT-3 is inherently biased against Muslims, or why a bot MIT trained on Reddit became a psychopath.

The closer we come to imitating the way humans learn and think in our AI systems, the more likely it is that exploits that affect the human mind will be adaptable to a digital one.

I’m not literally suggesting that people will walk around with pendulum wave toys hacking robots like wizards. In reality, we’ll need to be prepared for a paradigm where hackers can bypass security by overwhelming an AI with signals that wouldn’t normally affect a traditionally dumb computer.

AI that listens can be manipulated via audio, and AI that sees can be tricked into seeing what we want it to. And AI that processes information the same way humans do should, theoretically, be capable of being hypnotized just like us.
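The classic demonstration of “AI that sees can be tricked” is an adversarial perturbation. Here’s a minimal fast-gradient-sign sketch against a pretrained torchvision classifier; the epsilon, the random stand-in “image,” and the target class are all arbitrary choices for illustration.

```python
# Minimal FGSM (fast gradient sign method) sketch: nudge pixels so a
# pretrained classifier leans toward an attacker-chosen class while the image
# stays visually near-identical. Epsilon and the inputs are illustrative only.
import torch
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()

def fgsm_perturb(image: torch.Tensor, target_class: int, epsilon: float = 0.01) -> torch.Tensor:
    """Take one signed-gradient step toward an attacker-chosen target class."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step *against* the loss gradient to push the prediction toward target_class.
    return (image - epsilon * image.grad.sign()).detach()

clean = torch.rand(1, 3, 224, 224)                    # stand-in for a real photo
adversarial = fgsm_perturb(clean, target_class=207)   # 207 = 'golden retriever' in ImageNet
print(model(adversarial).argmax(dim=1))               # prediction after the nudge
```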

Published March 26, 2021 — 20:30 UTC

Why Trump’s social media network will be an epic failure

I’ve seen a lot of dumb startup pitches in my day, but a Donald Trump-branded social media network takes the stupid cake.

All I can figure is, we’re exactly two months away from the FBI’s birthday and team Trump’s determined to get the old agency the perfect gift this year. Or maybe Trump just really likes losing money.

For those out of the loop, here’s a video of Trump spokesperson Jason Miller discussing the matter yesterday:

Here’s the meat of Miller’s rambling:

I do think that we’re going to see President Trump returning to social media in probably about two or three months here, with his own platform. And this is something that I think will be the hottest ticket in social media, it’s going to completely redefine the game, and everybody is going to be waiting and watching to see what exactly President Trump does.

I honestly don’t think Trump’s stupid enough to launch his own social media network (Miller prefaced things with “I do think”). I’m more inclined to believe we’ll see Gab or Parler or something similar relaunched with a Trump partnership.

But I really hope Trump starts his own network. As a journalist, it’s always nice to say “I told you so.” And, as mentioned, the FBI would love it. I just don’t see how anyone with a basic understanding of the tech market could imagine, even for a second, that this is a viable concept.

Here’s why:

  • If the network was technically sound, innovative, and viable without Trump, you’d have to be an idiot to launch it with him. Trump automatically cuts your potential US user base in half, and it gets worse in markets outside the US.
  • A Trump social media network has to be uncensored (or else, what’s the point?). But you can’t be both anonymous and uncensored if you want to avoid being completely disrupted by trolls, bots, and bad actors.
  • If it doesn’t allow anonymous signups, the FBI will park on it. And it’s safe to say a Trump network won’t have the resources to fight off the US government like Apple does.
  • Without self-regulation, an online business is unsustainable: only a limited number of hosts can operate at the scale necessary to support enough users to generate a profit.
  • There aren’t enough hardcore conservative advertisers for a Trump-branded social media network to operate under the traditional ad-based social media paradigm.

The core problem with a Trump network is the same as with any advertising-based endeavor: you get more flies with honey than you do by trying to convince people the US government has been overthrown by a conspiracy against you.

It might sound like I’m trying to make a joke here, but I’m not. That’s the problem. Trump’s brand is built on convincing people he’s the rightfully-elected president of the United States without any evidence to support that claim. The number of advertisers willing to pay money to support that idea is bound to be slim.

The only upside here is that Trump’s really popular and people love to see what he’ll do next. The downside is that, at any moment, he could be associated with another violent coup attempt. Even if only indirectly, until he recognizes the absolute legitimacy of the current US government, he’s going to be inseparable from acts of violence committed in his name by those who believe his baseless lies.

And as long as that’s the case, his social media network won’t be able to exist on traditional advertising revenue.

Sure, there are other ways to fund a social media company. Trump could tap big-time conservative investors or come up with a subscription-based model. But none of those are sustainable beyond a few months.

Social media companies need massive user bases in order to be profitable. And if you’re already limiting your audience to people who don’t find Trump distasteful, it’s kind of silly to then further limit it to people willing to pay for social media.

Trump may have conservative support at the upper echelons, but his core supporters are the blue collar people who donate to his political causes. Like tithing at church, these supporters may not give in large amounts but they give often. And, also like tithing at church, the relationship completely changes when there’s a cover charge to get in.

You can’t keep a social media network running on VC investment alone, so without advertising bucks or a massive subscriber base, the network’s already doomed. Plus, it’ll have bigger problems than just convincing donors or users to shell out for the privilege of supporting Trump.

But let’s put on our imagination hats and our clown noses and pretend like a Trump-backed social media network could generate a profit. The next problem: Trump’s brand is anti-censorship.

Unfortunately for team Trump, the reality of operating a social media network is that you have two choices: either ban people from screaming “fire” in a crowded theater or live with a paradigm where a large percentage of users are only there to yell “fire” in crowded theaters.

If you don’t have rules against harassment, you’ll have nothing but harassment. That’s just internet 101. You can’t stop people from arguing. And, unless you ban liberals or censor anti-conservative rhetoric, you’re going to have a platform that’s inundated with people who oppose Trump, his views, and his supporters.

The alternative is a censored network endorsed by Donald Trump – which would be hilarious, really. Especially since conservatives are notorious for not understanding what the right to Free Speech is. 

So, uncensored? Team Trump will have to do what no other platform has managed: find a company willing to host an uncensored social media network. Which probably won’t happen, at least not in the US.

It’ll have to at least have some rules that, for example, prevent the solicitation of minors, the promotion of violence, and the sale of illegal weapons and drugs. And, as Parler found out the hard way, even if you have policies against illegal activities and the promotion of violence, you have to demonstrate you’re capable of handling it quickly when users breach your terms.

Trump’s going to need one hell of an AI team to create some powerful content moderation algorithms. It’s one thing to brand your network conservative; it’s another to associate it directly with the face of the right-wing conspiracy-theorist movement. That’s quite a target for bad actors.

In order for the Trump-backed network to do the bare minimum to obtain long-term hosting, it’ll end up being just as “censored” as Twitter and Facebook.

And we haven’t even gotten to what happens when the US government gives the Trump network the same treatment it’s given Apple for years. When every court in the country with a hate crime in its district starts subpoenaing the network’s entire database of user records, team Trump had better be ready for a never-ending fight against law enforcement.

But, hey, don’t let me talk anyone out of signing up. It might sound like a bad idea on paper, but when you look at it from the FBI’s point of view: what could possibly be better than a social media network that aims to gather millions of anti-government conspiracy theorists in a single website?
