
US banks turn to AI to tell homeless people to go away — along with fraud prevention and stuff

Banks have long embraced surveillance systems to prevent robbery. But they’re also using the technology to monitor customers, workers, and homeless people.

Several US banking giants are implementing AI cameras to analyze customer preferences, track what staff are doing, and observe activities around their premises, Reuters reports.

The tools serve a variety of purposes. Wells Fargo is leveraging the tech to prevent fraud, while City National plans to deploy facial recognition near ATMs as an authentication method.

JPMorgan, meanwhile, has been using computer vision to analyze archive footage of customer behavior. Its early analysis found that more men arrive before or after lunch, while women are more likely to visit in mid-afternoon.


One unnamed bank is using AI to arrange its layouts more ergonomically, while another is monitoring homeless people setting up tents at drive-through ATMs. An executive told Reuters that staff can play an audio recording “politely asking” the loiterers to leave. Sounds delightful.

All these new applications of AI come amid growing concerns about AI-powered surveillance.

Biometric scans can encroach on democratic freedoms, and facial recognition is notorious for misidentifying people of color, women, and trans people.

Critics have also noted that consumer monitoring can lead to income and racial discrimination. In 2020, the drug store chain Rite Aid shut down its facial recognition system after it was found to be mostly installed in lower-income, non-white neighborhoods.

Bank executives told Reuters that they were sensitive to these issues, but a backlash from customers and staff could stall their plans. Their deployments will also be restricted by a growing range of local laws.

A number of US cities have recently prohibited the use of facial recognition, including Portland, which last year banned the tech in all privately-owned places accessible to the public. The Oregon Bankers Association has asked for an exemption, but their request was rejected.

Still, in most places in the US banks are free to roll out AI monitoring tools. It’s another step in the sleepwalk towards surveillance capitalism.



NASA just made history by flying an autonomous helicopter on Mars

NASA has made history after successfully conducting the first-ever controlled flight on another planet.

The space agency’s Ingenuity helicopter briefly flew over Mars this morning, in what NASA previously described as a “Wright Brothers” moment.

The 1.8 kg chopper ascended three meters above the red planet, hovered for around 30 seconds, made a turn, and then touched back down on the Martian surface.

“We can now say that human beings have flown a rotorcraft on another planet,” said MiMi Aung, Ingenuity Mars Helicopter project manager at NASA’s Jet Propulsion Laboratory (JPL).

Credit: NASA/JPL-Caltech
Ingenuity took this photo while hovering over Mars.


The autonomous drone arrived on Mars inside NASA’s Perseverance rover on February 18. It was slated to make its first experimental flight on April 11, but the attempt was twice postponed due to technical issues.

At around 7:00 AM EST, NASA confirmed that the maiden voyage had been successful. Ingenuity will now attempt a series of more challenging flights.

While pilots planned the chopper’s route, Ingenuity had to fly autonomously because of communication delays. A combination of sensors and computer vision navigated the flight path.

Cameras on the helicopter will capture a new perspective of conditions on Mars. But Ingenuity’s primary mission is testing the potential of flying on other worlds.

NASA will use insights from the flights to develop future helicopters, which could one day help astronauts explore the red planet.



Here’s why we should never trust AI to identify our emotions

Imagine you are in a job interview. As you answer the recruiter’s questions, an artificial intelligence (AI) system scans your face, scoring you for nervousness, empathy and dependability. It may sound like science fiction, but these systems are increasingly used, often without people’s knowledge or consent.

Emotion recognition technology (ERT) is in fact a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind emotion recognition systems is controversial: there are biases built into the systems.


Many companies use ERT to test customer reactions to their products, from cereal to video games. But it can also be used in situations with much higher stakes, such as in hiring, by airport security to flag faces as revealing deception or fear, in border control, in policing to identify “dangerous people” or in education to monitor students’ engagement with their homework.

Shaky scientific ground

Fortunately, facial recognition technology is receiving public attention. The award-winning film Coded Bias, recently released on Netflix, documents the discovery that many facial recognition technologies do not accurately detect darker-skinned faces. And the research team managing ImageNet, one of the largest and most important datasets used to train facial recognition, was recently forced to blur 1.5 million images in response to privacy concerns.

Revelations about algorithmic bias and discriminatory datasets in facial recognition technology have led large technology companies, including Microsoft, Amazon and IBM, to halt sales. And the technology faces legal challenges regarding its use in policing in the UK. In the EU, a coalition of more than 40 civil society organisations have called for a ban on facial recognition technology entirely.

Like other forms of facial recognition, ERT raises questions about bias, privacy and mass surveillance. But ERT raises another concern: the science of emotion behind it is controversial. Most ERT is based on the theory of “basic emotions” which holds that emotions are biologically hard-wired and expressed in the same way by people everywhere.

This is increasingly being challenged, however. Research in anthropology shows that emotions are expressed differently across cultures and societies. In 2019, the Association for Psychological Science conducted a review of the evidence, concluding that there is no scientific support for the common assumption that a person’s emotional state can be readily inferred from their facial movements. In short, ERT is built on shaky scientific ground.

Also, like other forms of facial recognition technology, ERT is encoded with racial bias. A study has shown that systems consistently read black people’s faces as angrier than white people’s faces, regardless of the person’s expression. Although the study of racial bias in ERT is small, racial bias in other forms of facial recognition is well-documented.

There are two ways that this technology can hurt people, says AI researcher Deborah Raji in an interview with MIT Technology Review: “One way is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work — where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them.”

So even if facial recognition technology can be de-biased and accurate for all people, it still may not be fair or just. We see these disparate effects when facial recognition technology is used in policing and judicial systems that are already discriminatory and harmful to people of colour. Technologies can be dangerous when they don’t work as they should. And they can also be dangerous when they work perfectly in an imperfect world.

The challenges raised by facial recognition technologies – including ERT – do not have easy or clear answers. Solving the problems presented by ERT requires moving from AI ethics centred on abstract principles to AI ethics centred on practice and effects on people’s lives.

When it comes to ERT, we need to collectively examine the controversial science of emotion built into these systems and analyse their potential for racial bias. And we need to ask ourselves: even if ERT could be engineered to accurately read everyone’s inner feelings, do we want such intimate surveillance in our lives? These are questions that require everyone’s deliberation, input and action.

Citizen science project

ERT has the potential to affect the lives of millions of people, yet there has been little public deliberation about how – and if – it should be used. This is why we have developed a citizen science project.

On our interactive website (which works best on a laptop, not a phone) you can try out a private and secure ERT for yourself, to see how it scans your face and interprets your emotions. You can also play games comparing human versus AI skills in emotion recognition and learn about the controversial science of emotion behind ERT.

Most importantly, you can contribute your perspectives and ideas to generate new knowledge about the potential impacts of ERT. As the computer scientist and digital activist Joy Buolamwini says: “If you have a face, you have a place in the conversation.”

This article by Alexa Hagerty, Research Associate of Anthropology, University of Cambridge and Alexandra Albert, Research Fellow in Citizen Social Science, UCL, is republished from The Conversation under a Creative Commons license. Read the original article.


New to computer vision and medical imaging? Start with these 10 projects

Computer vision (CV) is a field of artificial intelligence (AI) and computer science that enables automated systems to see, i.e. to process images and video in a human-like manner to detect and identify objects or regions of importance, predict an outcome, or even alter the image to a desired format [1]. The most popular use cases in the CV domain include automated perception for autonomous drive, augmented and virtual reality (AR, VR) for simulations, games, and glasses, and fashion- or beauty-oriented e-commerce.

Medical image (MI) processing, on the other hand, involves much more detailed analysis of medical images, which are typically grayscale (such as MRI, CT, or X-ray images), for automated pathology detection, a task that otherwise requires a trained specialist’s eye. The most popular use cases in the MI domain include automated pathology labeling, localization, association with treatment or prognostics, and personalized medicine.

Prior to the advent of deep learning methods, solution frameworks relied heavily on 2D signal processing techniques such as image filtering, wavelet transforms, and image registration, followed by classification models [2–3]. Signal processing solutions remain the top choice for model baselining owing to their low latency and high generalizability across data sets.

However, deep learning solutions and frameworks have emerged as a new favorite owing to their end-to-end nature, which eliminates the need for feature engineering, feature selection, and output thresholding altogether. In this tutorial, we will review “Top 10” project choices for beginners in the fields of CV and MI and provide examples with data and starter code to aid self-paced learning.

CV and MI solution frameworks can be analyzed in three segments: Data, Process, and Outcomes [4]. It is important to always visualize the data required for such solution frameworks to have the format “{X,Y}”, where X represents the image/video data and Y represents the data target or labels. While naturally occurring unlabelled images and video sequences (X) can be plentiful, acquiring accurate labels (Y) can be an expensive process. With the advent of several data annotation platforms such as [5–7], images and videos can be labeled for each use case.

Since deep learning models typically rely on large volumes of annotated data to automatically learn features for subsequent detection tasks, the CV and MI domains often suffer from the “small data challenge”, wherein the number of samples available for training a machine learning model is several orders of magnitude smaller than the number of model parameters.

The “small data challenge”, if unaddressed, can lead to overfitted or underfitted models that fail to generalize to new, unseen test data sets. The process of designing a solution framework for the CV and MI domains must therefore always include model complexity constraints, wherein models with fewer parameters are typically preferred to prevent overfitting.

Finally, the solution framework outcomes are analyzed both qualitatively through visualization solutions and quantitatively in terms of well-known metrics such as precision, recall, accuracy, and F1 or Dice coefficients [8–9].
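For readers new to these metrics, here is a minimal NumPy sketch (illustrative only, not part of any project's code base) showing how precision, recall, F1, and the Dice coefficient are computed from binary prediction masks. Note that for binary masks the Dice coefficient and F1 score coincide.

```python
import numpy as np

def precision_recall_f1_dice(y_true, y_pred):
    """Compute precision, recall, F1, and Dice for binary arrays/masks."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tp = np.logical_and(y_true, y_pred).sum()   # true positives
    fp = np.logical_and(~y_true, y_pred).sum()  # false positives
    fn = np.logical_and(y_true, ~y_pred).sum()  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    # Dice = 2|A∩B| / (|A|+|B|); identical to F1 for binary masks.
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return precision, recall, f1, dice

print(precision_recall_f1_dice([1, 1, 0, 0], [1, 0, 1, 0]))  # (0.5, 0.5, 0.5, 0.5)
```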

The projects listed below span a range of difficulty levels (Easy, Medium, Hard) with respect to data pre-processing and model building. They also represent a variety of use cases currently prevalent in the research and engineering communities. The projects are defined in terms of the: Goal, Methods, and Results.

Project 1: MNIST and Fashion MNIST for Image Classification (Level: Easy)

Goal: To process images (X) of size [28×28] pixels and classify them into one of 10 output categories (Y). For the MNIST data set, the input images are handwritten digits in the range 0 to 9 [10]. The training and test sets contain 60,000 and 10,000 labeled images, respectively. Inspired by the handwritten digit recognition problem, the Fashion MNIST data set was later launched [11], where the goal is to classify images (of size [28×28]) into clothing categories as shown in Fig. 1.

Fig 1: The MNIST and Fashion MNIST data sets with 10 output categories each. (Image by Author)

Methods: When the input images are small ([28×28] pixels) and grayscale, convolutional neural network (CNN) models, with anywhere from one to several convolutional layers, are suitable classification models. An example of an MNIST classification model built using Keras is presented in the colab file:

MNIST colab file

Another example of classification on the Fashion MNIST data set is shown in:

Fashion MNIST Colab file

In both instances, the key parameters to tune include the number of layers, dropout, the optimizer (adaptive optimizers preferred), the learning rate, and the kernel size. Since this is a multi-class problem, the ‘softmax’ activation function is used in the final layer so that the 10 output neurons form a probability distribution over the classes.
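As an illustration, here is a minimal Keras sketch of such a CNN; the specific layer counts, kernel sizes, and dropout rate are illustrative choices rather than the exact configuration in the colab files above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mnist_cnn(num_classes=10):
    # Small CNN for 28x28 grayscale inputs. The key tuning knobs named
    # above: number of conv layers, kernel size, dropout, optimizer.
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, kernel_size=3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Conv2D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dropout(0.25),
        layers.Dense(128, activation="relu"),
        # softmax turns the 10 outputs into a probability distribution
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",  # adaptive optimizer, as suggested
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mnist_cnn()
```

Training is then a call such as `model.fit(x_train, y_train, epochs=5)` on the normalized MNIST arrays.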

Results: As the number of convolutional layers increases from 1 to 10, the classification accuracy is found to increase as well. The MNIST data set is well studied in the literature, with test accuracies in the range of 96–99%. For the Fashion MNIST data set, test accuracies are typically in the range of 90–96%. An example of visualizing the MNIST classification outcome using CNN models is shown in Fig 2 below (see the visualization at the front end here).

Fig. 2: Example of visualizing the outcome of CNN model for MNIST data. The input is shown in the top left corner and the respective layer activations are shown. The final result is between 5 and 8.

Project 2: Pathology Classification for Medical Images (Level: Easy)

Goal: To classify medical images (acquired using Optical Coherence Tomography, OCT) as Normal, Diabetic Macular Edema (DME), Drusen, or choroidal neovascularization (CNV), as shown in [12]. The data set contains about 84,000 training images and about 1,000 labeled test images, and each image has a width of 800 to 1,000 pixels, as shown in Fig 2.

Fig 2: Examples of OCT images from the Kaggle Dataset in [12].

Methods: Deep CNN models such as ResNet and CapsuleNet [12] have been applied to classify this data set. The data needs to be resized to [512×512] or [256×256] to be fed to standard classification models. Since medical images show less variation in object categories per image frame than non-medical outdoor and indoor images, the number of medical images required to train large CNN models is significantly smaller than the number of non-medical images needed for comparable tasks.

The work in [12] and the OCT code base demonstrate retraining the ResNet layers for transfer learning and classification of test images. The parameters to be tuned here include the optimizer, learning rate, size of the input images, and number of dense layers at the end of the ResNet backbone.
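A sketch of this transfer-learning setup in Keras might look like the following. This is illustrative, not the exact OCT code base: in practice you would load `weights="imagenet"` (omitted here to avoid a download), resize inputs as described above, and possibly unfreeze some base layers.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_oct_classifier(input_size=256, num_classes=4):
    # Pretrained ResNet50 backbone used as a frozen feature extractor;
    # only the new dense head is retrained for the 4 OCT classes.
    base = tf.keras.applications.ResNet50(
        include_top=False,
        weights=None,  # use weights="imagenet" in practice
        input_shape=(input_size, input_size, 3),
        pooling="avg")
    base.trainable = False  # freeze pretrained features
    model = models.Sequential([
        base,
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

oct_model = build_oct_classifier(input_size=128)
```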

Results: For the ResNet model, test accuracy can vary between 94% and 99% depending on the number of training images, as shown in [12]. Fig 3 qualitatively demonstrates the performance of the classification model.

Fig. 3: Regions of interest (ROIs) for each pathology superimposed on the original image using the Gradcam library in python. (Image by author)

These visualizations are produced using the Gradcam library, which projects the CNN layer activations onto the original image to reveal the regions of interest, i.e. the automatically detected features of importance, for the classification task. The tf_explain library provides a convenient Gradcam implementation.

Project 3: AI Explainability for Multi-label Image Classification (Level: Easy)

Goal: CNN models enable end-to-end delivery, meaning there is no need to engineer and rank features for classification; the model’s output is the desired process outcome. However, it is often important to visualize and explain CNN model performance, as shown in the later parts of Project 2.

Two well-known visualization and explainability libraries are tf_explain and Local Interpretable Model-Agnostic Explanations (LIME). In this project, the goal is to perform multi-label classification and explain what the CNN model sees as features when it classifies an image a particular way. We consider a multi-label scenario wherein one image can contain multiple objects, for example a cat and a dog, as in the Colab for LIME.

Here, the input is images containing both a cat and a dog, and the goal is to identify which regions correspond to the cat and which to the dog.

Method: In this project, each image is subjected to super-pixel segmentation, which divides the image into several sub-regions with similar pixel color and texture characteristics. The number of sub-regions can be provided manually as a parameter. Next, the InceptionV3 model is invoked to assign each super-pixel sub-region a probability of belonging to one of the 1,000 classes InceptionV3 was originally trained on. Finally, the object probabilities are used as weights to fit a regression model that explains the ROIs corresponding to each class, as shown in Fig. 4.

Fig 4: Explainability of image super-pixels using regression-like models in LIME. (Image by author)
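The regression-weighting idea behind LIME can be illustrated with a self-contained NumPy toy. This mimics the masking-and-fitting step conceptually; the real pipeline uses the lime library, actual super-pixel segmentation, and InceptionV3 as described above. The "cat region" super-pixels here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_superpixels = 6

# Stand-in for the classifier: the probability of the "cat" class is
# high whenever super-pixels 1 and 3 (the hypothetical cat region)
# are visible in the perturbed image.
def toy_class_probability(mask):
    return 0.1 + 0.4 * mask[1] + 0.4 * mask[3]

# Perturb the image by randomly switching super-pixels on/off, then fit
# a linear model; its coefficients rank each region's importance.
masks = rng.integers(0, 2, size=(200, n_superpixels))
probs = np.array([toy_class_probability(m) for m in masks])
design = np.column_stack([masks, np.ones(len(masks))])  # add intercept
coef, *_ = np.linalg.lstsq(design, probs, rcond=None)

top_regions = np.argsort(coef[:n_superpixels])[::-1][:2]
print(sorted(top_regions.tolist()))  # -> [1, 3]: the "cat" regions rank highest
```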

Results: Using the proposed method, ROIs in most non-medical images should be explainable. Qualitative assessment and explainability of this kind are especially useful in corner cases, i.e. situations where the model misclassifies or misses objects of interest. In such situations, explaining what the CNN model is looking at, and boosting ROIs accordingly to correct the overall classification performance, can significantly reduce data-induced biases.

Project 4: Transfer learning for 2D Bounding box detection on new objects (Level: Medium)

Goal: The next step after image classification is the detection of objects of interest by placing bounding boxes around them. This is a significant problem in the autonomous drive domain to accurately identify moving objects such as cars and pedestrians from static objects such as roadblocks, street signs, trees, and buildings.

The major difference between this project and the prior ones is the format of the data. Here, the labels Y typically take the form [x,y,w,h] per object of interest, where (x,y) represents the top-left corner of the bounding box and w and h correspond to its width and height. In this project, the goal is to leverage a pre-trained classifier for its feature extraction capabilities and then retrain it on a small set of images to create a tight bounding box around a new object.

Method: In the Bounding Box colab, we extend a pre-trained object detector, a single-shot detector (SSD) with ResNet50 skip connections and a feature pyramid network backbone, pre-trained for object detection on the MS-COCO dataset [13], to detect a completely unseen object category: a rubber duck.

In this transfer learning setup, the weights already learned by the early layers of the object detector are useful for extracting local structural and textural information from images, and only the final classifier layer requires retraining for the new object class. This enables retraining the object detector for a new class, such as the rubber duck in this use case, using as few as 5–15 images of the new object. The parameters to be tuned include the optimizer, learning rate, input image size, and number of neurons in the final classifier layer.

Results: One major difference between object detectors and the CNN-based classifier models shown above is an additional output metric, Intersection over Union (IoU) [11], which measures the extent of overlap between the actual and predicted bounding boxes. Additionally, an object detector typically consists of a classifier (which predicts the object class) and a bounding box regressor (which predicts the dimensions of the bounding box around the object). An example of the Google API for object detection on a new unseen image is shown in Fig. 5.
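IoU itself is straightforward to compute from two boxes in the [x,y,w,h] format described above; a minimal sketch:

```python
def iou(box_a, box_b):
    """IoU of two boxes given as [x, y, w, h], (x, y) being the top-left corner."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Corners of the intersection rectangle.
    ix1 = max(ax, bx)
    iy1 = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(iou([0, 0, 2, 2], [1, 1, 2, 2]))  # partially overlapping boxes -> 1/7
```

A perfect prediction gives IoU = 1, disjoint boxes give 0; detection benchmarks commonly count a prediction as correct when IoU exceeds a threshold such as 0.5.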

Extensions of the 2D bounding box detector to 3D bounding boxes specifically for autonomous drive are shown in these projects.

Fig 5: Example of 2D bounding box detection using the TensorFlow API for object detection

Project 5: Personalized Medicine and Explainability (Level: Medium)

Goal: In this project, the goal is to automatically segment ROIs from multiple pathology sites to classify the extent of anemia-like pallor in a patient and track the pallor over time [13]. The two major differences between this project and the previous ones are that: 1) pallor needs to be detected across multiple image sites, such as the conjunctiva (under the eye) and the tongue, to predict a single label, as shown in Fig. 6; 2) the ROIs corresponding to pallor need to be displayed and tracked over time.

Fig 6: Example of anemia-like pallor detection using images processed from multiple pathological sites. (Image by author)

Methods: For this project, feature-based models and CNN-based classifiers are applied with heavy data augmentation using the ImageDataGenerator in Keras. To fuse the outcomes from multiple pathology sites, early, mid, or late fusion can be applied.

The work in [13] applies late fusion, wherein the layer before the classifier, considered the optimal feature representation of the image, is used to fuse features across the pathological sites. Finally, the Deepdream algorithm, as shown in the Deepdream Colab, is applied to the original eye and tongue images to visualize the ROIs and explain the extent of pathology. The parameters to tune in this project include those from Project 2, along with the additive gradient factor for the Deepdream visualizations.
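Late fusion can be sketched in a few lines of Keras. The branch sizes, input resolution, and binary pallor label here are illustrative assumptions, not the architecture from [13]; the point is that each site gets its own feature extractor and the penultimate feature vectors are concatenated before a single classifier.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def site_branch(name):
    # Small per-site feature extractor; [13] uses larger CNNs per site.
    inp = layers.Input(shape=(64, 64, 3), name=name)
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(32, activation="relu")(x)  # penultimate feature vector
    return inp, x

eye_in, eye_feat = site_branch("eye")
tongue_in, tongue_feat = site_branch("tongue")

# Late fusion: concatenate the per-site feature vectors, then classify.
fused = layers.concatenate([eye_feat, tongue_feat])
out = layers.Dense(2, activation="softmax")(fused)  # pallor vs. no pallor

fusion_model = Model([eye_in, tongue_in], out)
fusion_model.compile(optimizer="adam", loss="categorical_crossentropy")
```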

Results: The data for this work is available for benchmarking. The Deepdream visualizations are shown in Fig. 7, where we observe a higher concentration of features corresponding to pallor in the under-eye blood vessels than anywhere else in the eye. Similarly, we observe differences in features between the inner and outer segments of the tongue. These assessments are useful for creating a personalized pathology tracking system for patients with anemia.

Fig 7: Example of feature concentrations from the Deep Dream implementation. A heavy concentration of gradients is observed in the conjunctival, or under-eye, blood vessel regions. (Image by author)

Project 6: Point Cloud Segmentation for Object Detection (Level: Hard)

Goal: In this project, the input is a stream of point clouds, i.e. the output of Lidar sensors, which provide depth resolution. The primary difference between Lidar point clouds and an image is that point clouds provide 3D resolution: each voxel (the 3D equivalent of a pixel) represents the location of an object relative to the Lidar source and the height of the object relative to the Lidar source. The main challenges posed by point cloud data models are i) the computational complexity of the model if 3D convolutions are used and ii) object transformation invariance, meaning a rotated object should still be detected as the same object, as shown in [13].

Method: The data set for this project is the ModelNet40 shape classification benchmark, which contains over 12,000 3D models from 40 object classes. Each object is sub-sampled to extract a fixed number of points, followed by augmentation to cater to multiple transformations in shape. Next, 1D convolutions are used to learn shape features using the PyTorch library, as in the Pointnet colab.
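The core PointNet idea, per-point 1D convolutions followed by a symmetric max-pool over points, can be sketched in PyTorch as follows. This is a heavily simplified toy, not the full Pointnet colab model (which adds input/feature transforms and batch normalization).

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Simplified PointNet-style classifier: shared per-point MLPs
    implemented as kernel-size-1 Conv1d layers, followed by a max-pool
    over points, which makes the features invariant to point ordering."""

    def __init__(self, num_classes=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),    # per-point MLP on (x, y, z)
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, points):                  # points: (batch, 3, n_points)
        x = self.features(points)
        x = x.max(dim=2).values                 # symmetric function over points
        return self.classifier(x)

point_model = TinyPointNet()
logits = point_model(torch.randn(2, 3, 1024))   # 2 clouds of 1,024 points
print(logits.shape)                             # torch.Size([2, 40])
```

Because the max-pool is order-independent, shuffling the points in a cloud leaves the prediction unchanged, which is exactly the permutation invariance a point-cloud model needs.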

Results: The outcome of the model can be summarized using Fig. 8 below. Up to 89% training accuracy for object classification can be achieved by this method, which can also be extended to 3D semantic segmentation. Extensions of this work can be useful for 3D bounding box detection in autonomous drive use cases.

Fig. 8: Image from [15] that identifies objects from the point clouds

Project 7: Image Semantic Segmentation Using U-net for Binary and Multi-class Problems (Level: Medium)

Goal: The CNN models so far have been applied to automatically learn features that can then be used for classification, a process known as feature encoding. As a next step, we apply a decoder unit with a structure similar to the encoder’s to enable the generation of an output image. This encoder-decoder pair allows the input and output to have similar dimensions, i.e. the input is an image and the output is also an image.

Methods: The encoder-decoder combination with residual skip connections is popularly known as the U-net [15]. For binary and multi-class problems, the data has to be formatted such that if X (input image) has dimensions [m x m] pixels, Y has dimensions [m x m x d], where ‘d’ is the number of classes to be predicted. The parameters to tune include optimizer, learning rate, and depth of the U-net model as shown in [15] and Fig. 9 below (source here).

Fig. 9. Example of U-net model.
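A minimal one-level U-net sketch in Keras illustrates the encoder-decoder-with-skip idea and the [m x m] to [m x m x d] mapping; real U-nets as in [15] use several levels and far more filters, so treat the sizes below as illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(m=128, d=3):
    # Minimal one-level U-net: encoder, bottleneck, decoder, one skip.
    inp = layers.Input(shape=(m, m, 1))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D(2)(c1)                        # encoder
    b = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D(2)(b)                         # decoder
    u1 = layers.concatenate([u1, c1])                      # skip connection
    c2 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
    # Per-pixel softmax over the d classes -> [m x m x d] output map.
    out = layers.Conv2D(d, 1, activation="softmax")(c2)
    return Model(inp, out)

unet = tiny_unet()
print(unet.output_shape)  # (None, 128, 128, 3): an [m x m x d] map
```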

Results: The U-net model can learn to generate binary and multi-class semantic maps from large and small data sets [16–17], but it is found to be sensitive to data imbalance. Thus, selecting the right training data set is significantly important for optimal outcomes. Other extensions to this work include DenseNet connections in the model, or other encoder-decoder networks such as MobileNet or Xception [17].

Project 8: Machine Translation for Posture and Intention Classification (Level: Hard)

Goal: Automated detection of posture or gesture often involves keypoint identification (such as identifying the skeletal structure) in videos, which can lead to the identification of posture (standing, walking, moving) or of pedestrian intention (crossing the road, not crossing), etc. [18–19], as shown in Fig. 10 below. For this category of problems, keyframe information from multiple subsequent video frames is processed collectively to generate pose- and intention-related predictions.


Amazon’s new algorithm will spread workers’ duties across their muscle-tendon groups

Amazon is often accused of treating staff like expendable robots. And like machines, sometimes employees break down.

A 2019 survey of 145 warehouse workers found that 65% of them experienced physical pain while doing their jobs. Almost half (42%) said the pain persists even when they’re not working.

That’s a real shame — for Amazon. Those pesky injuries can slow down the company’s relentless pace of work.

But the e-commerce giant may have found a solution. Is it a more humane workload? Upgraded workers’ rights? Of course not. It’s an algorithm that switches staff around tasks that use different body parts.


Jeff Bezos unveiled the system in his final letter as CEO to Amazon shareholders:

We’re developing new automated staffing schedules that use sophisticated algorithms to rotate employees among jobs that use different muscle-tendon groups to decrease repetitive motion and help protect employees from MSD [musculoskeletal disorder] risks. This new technology is central to a job rotation program that we’re rolling out throughout 2021.

The world’s richest man added that Amazon has already reduced these injuries.

He said that musculoskeletal disorders at the company dropped by 32% from 2019 to 2020. More importantly, MSDs resulting in time off work decreased by more than half.

Those claims probably shouldn’t be taken at face value, however. Last year, an investigation found that Amazon had misled the public about workplace injury rates.

Bezos did acknowledge that the company still needs to do a better job for employees. But he disputed claims that employees are “treated as robots.”

Employees are able to take informal breaks throughout their shifts to stretch, get water, use the restroom, or talk to a manager, all without impacting their performance.

He added that Amazon is going to be “Earth’s Best Employer” and “Earth’s Safest Place to Work.” Reports of staff peeing in bottles, shocking injury rates, union-busting, invasive surveillance, and impossible performance targets suggest he’s got a long way to go.



How a theoretical mouse could crack the stock market

A team of physicists at Emory University recently published research indicating they’d successfully managed to reduce a mouse’s brain activity to a simple predictive model. This could be a breakthrough for artificial neural networks. You know: robot brains.

Let there be mice: Scientists can do miraculous things with mice, such as growing a human ear on one’s back or controlling one via computer mouse. But this is the first time we’ve heard of researchers using machine learning techniques to grow a theoretical mouse brain.

Per a press release from Emory University:

The dynamics of the neural activity of a mouse brain behave in a peculiar, unexpected way that can be theoretically modeled without any fine tuning.

In other words: We can observe a mouse’s brain activity in real-time, but there are simply too many neuronal interactions for us to measure and quantify each and every one – even with AI. So the scientists are using the equivalent of a math trick to make things simpler.

How’s it work? The research is based on a theory of criticality in neural networks. Basically, all the neurons in your brain exist in an equilibrium between chaos and order. They don’t all do the same thing, but they also aren’t bouncing around randomly.

The researchers believe the brain operates in this balance in much the same way other state-transitioning systems do. Water, for example, can change from gas to liquid to solid. And, at some point during each transition, it achieves a criticality where its molecules are in either both states or neither.

The researchers hypothesized that brains (organic neural networks) function under the same kind of critical balance. So, they ran a battery of tests on mice as they navigated mazes in order to build a database of brain-activity data.

Next, the team went to work developing a working, simplified model that could predict neuron interactions using the experimental data as a target. According to their research paper, their model is accurate to within a few percentage points.
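To make that workflow concrete, here is a minimal, purely illustrative sketch: simulate some “recorded” activity from a known linear system, fit a simplified predictive model to it by least squares, and measure the one-step prediction error. None of the numbers or modeling choices below come from the Emory paper.

```python
import numpy as np

# Toy stand-in for recorded brain activity: a stable linear
# dynamical system x[t+1] = A @ x[t] + noise. The real study fits a
# far more sophisticated model; this only illustrates the workflow.
rng = np.random.default_rng(0)
n_neurons, n_steps = 20, 2000

A_true = rng.normal(scale=0.2, size=(n_neurons, n_neurons))
A_true *= 0.9 / max(abs(np.linalg.eigvals(A_true)))  # keep dynamics stable

x = np.zeros((n_steps, n_neurons))
x[0] = rng.normal(size=n_neurons)
for t in range(n_steps - 1):
    x[t + 1] = A_true @ x[t] + 0.05 * rng.normal(size=n_neurons)

# Fit a simplified predictive model by least squares:
# find A_hat such that x[:-1] @ A_hat ≈ x[1:]  (so A_hat ≈ A_true.T).
A_hat, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)

rel_err = np.linalg.norm(x[:-1] @ A_hat - x[1:]) / np.linalg.norm(x[1:])
param_err = np.linalg.norm(A_hat.T - A_true) / np.linalg.norm(A_true)
print(f"one-step prediction error: {rel_err:.2f}, parameter error: {param_err:.2f}")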

What’s it mean? This is early work, but there’s a reason scientists use mouse brains for this kind of research: they’re not so different from ours. If you can reduce what goes on in a mouse’s head to a working AI model, then it’s likely you can eventually scale that to human-brain levels.

On the conservative side of things, this could lead to much more robust deep learning solutions. Our current neural networks are a pale attempt to imitate what nature does with ease. But the Emory team’s mouse models could represent a turning point in robustness, especially in areas where a model is likely to be affected by outside factors.

This could, potentially, include stronger AI inferences where diversity is concerned and increased resilience against bias. And other predictive systems could benefit as well, such as stock market prediction algorithms and financial tracking models. It’s possible this could even increase our ability to predict weather patterns over long periods of time.

Quick take: This is brilliant, but its actual usefulness remains to be seen. Ironically, the tech and AI industries are also at a weird, unpredictable point of criticality where brute-force hardware solutions and elegant software shortcuts are starting to pull away from each other.

Still, if we take a highly optimistic view, this could also be the start of something amazing such as artificial general intelligence (AGI) – machines that actually think. No matter how we arrive at AGI, it’s likely we’ll need to begin with models capable of imitating nature’s organic neural nets as closely as possible. You’ve got to start somewhere.

Instagram apologizes after algorithm promotes harmful diet content to people with eating disorders

Instagram has apologized for a “mistake” that led its algorithm to promote harmful diet content to people with eating disorders.

The social network had been automatically recommending terms including “appetite suppressant,” which campaigners feared could lead vulnerable people to relapse.

Facebook, which owns Instagram, told the BBC that the suggestions were triggered by a new search functionality:

As part of this new feature, when you tap on the search bar, we’ll suggest topics you may want to search for. Those suggestions, as well as the search results themselves, are limited to general interests, and weight loss should not have been one of them.

The company said the harmful terms have now been removed and the issue with the search feature has been resolved.

Content that promotes eating disorders is banned from Instagram, while posts advertising weight-loss products are supposed to be hidden from users known to be under 18.

However, the error shows that policies alone can’t control the platform’s algorithms.

Billionaire clown Elon Musk drags the late Chris Farley into Tesla’s feud with Ford

There’s never a dull moment in tech. Elon Musk and Ford CEO Jim Farley got into it on Twitter yesterday after a new Ford advertisement seemingly tossed shade at Tesla’s Autopilot.

Heads up: The real lead here is that Ford’s new BlueCruise kit, a hands-free driver-assistance system, will launch on certain Mustang and F-150 models. Can we all take a moment to recognize how awesome the idea of an autonomous Mustang in the future is?

But: Elon being Elon, there was no way the news was ever going to be about anything other than him.

Ford CEO Jim Farley apparently couldn’t resist trolling Tesla a bit when he tweeted that his company tested BlueCruise “in the real world, so our customers don’t have to.” This has been interpreted to be a jab at Tesla’s simulation-based training methods.

Musk responded (to a tweet featuring the quote) by invoking Farley’s cousin, the late Chris Farley. Yes, that Chris Farley:

Many on Twitter found the reply innocuous and good-natured; others saw it as over-the-top and disrespectful. It’s generally considered impolite to use a clip of someone’s deceased relative to troll them on social media.

Here’s the thing: It’s macabre for Musk and Farley to joke about training driverless cars. Autopilot failures have been a contributing factor in numerous accidents involving Tesla vehicles, some of which were fatal.

There is currently no production vehicle on the market that can drive itself safely and/or legally. We’ve seen the videos and the fact remains: level two autonomy is not self-driving.

Tesla’s “Autopilot” and “Full Self Driving” systems are not capable of auto-piloting or self-driving. Full stop.

This kind of rhetoric, two childish CEOs bantering about the abilities of their vehicles, gives consumers an inflated view of what these cars are capable of. Whether consumers think Ford’s built something that’s better than “Autopilot,” or that Tesla already has things figured out – it seems the reality of level two autonomy is getting lost in the hype.

The bottom line: The technology powering these vehicles is amazing, but at the end of the day it’s just glorified cruise control. Drivers are meant to keep their hands on the wheel and their eyes on the road at all times when operating any current production vehicle, whether its so-called self-driving features are engaged or not.

When these companies and their CEOs engage in back-and-forth on Twitter, they’re taking a calculated risk that consumers will buy into the rivalry and enjoy the capitalist competition as it plays out for their amusement.

They take the same kind of calculated risk when they continue marketing their products as “self-driving” features even after customers keep overestimating their abilities and dying.

Study: People trust the algorithm more than each other

Our daily lives are run by algorithms. Whether we’re shopping online, deciding what to watch, booking a flight, or just trying to get across town, artificial intelligence is involved. It’s safe to say we rely on algorithms, but do we actually trust them?

Up front: Yes. We do. A trio of researchers from the University of Georgia recently conducted a study to determine whether humans are more likely to trust an answer they believe was generated by an algorithm or crowd-sourced from humans.

The results indicated that humans were more likely to trust algorithms when problems became too complex for them to trust their own answers.

Background: We all know that, to some degree or another, we’re beholden to the algorithm. We tend to trust that Spotify and Netflix know how to entertain us. So it’s not surprising that humans would choose answers based on the sole distinction that they’ve been labeled as being computer-generated.

But the interesting part isn’t that we trust machines, it’s that we trust them when we probably shouldn’t.

How it works: The researchers tapped 1,500 participants for the study. Participants were asked to look at a series of images and count the number of people in each one. As that number increased, participants grew less confident in their answers and were offered the chance to align their responses with either crowd-sourced answers from a group of thousands of people, or answers they were told had been generated by an algorithm.

Per the study:

In three preregistered online experiments, we found that people rely more on algorithmic advice relative to social influence as tasks become more difficult. All three experiments focused on an intellective task with a correct answer and found that subjects relied more on algorithmic advice as difficulty increased. This effect persisted even after controlling for the quality of the advice, the numeracy and accuracy of the subjects, and whether subjects were exposed to only one source of advice, or both sources.
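The reported pattern, more weight placed on algorithmic advice as tasks get harder, can be captured in a one-line toy model. The linear weighting function below is purely illustrative; the study measured this shift empirically rather than assuming any formula.

```python
def final_estimate(own, advice, difficulty, trust_gain=0.8):
    """Blend a subject's own guess with outside advice.

    difficulty runs from 0 (easy) to 1 (hard); the weight placed on
    the advice grows linearly with it. Illustrative only, not the
    researchers' model.
    """
    w = trust_gain * difficulty
    return (1 - w) * own + w * advice

# On an easy image the subject mostly keeps their own count;
# on a hard one the estimate moves most of the way toward the advice.
print(final_estimate(10, 20, difficulty=0.1))
print(final_estimate(10, 20, difficulty=0.9))
```

The study's key finding is essentially that the empirical analogue of `w` climbed with difficulty for algorithmic advice faster than for social advice, even when advice quality was held constant.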

The problem here is that AI isn’t actually well suited to a task like counting the number of humans in an image. It may sound like a problem built for a computer – it’s math-based, after all – but the fact of the matter is that AI often struggles to identify objects in images, especially when there aren’t clear lines of separation between objects of the same type.

Quick take: The research indicates the general public is probably a little confused about what AI can do. Algorithms are getting stronger and AI has become an important facet of our everyday lives, but it’s never a good sign when the average person seems to believe a given answer is better just because they think it was generated by an algorithm.

EU commission to take hard-line stance against ‘high-risk’ AI

The European Commission is set to unveil a new set of regulations for artificial intelligence products. While some AI tech would be outright banned, other potentially harmful systems would be forced through a vetting process before developers could release them to the general public.

The proposed legislation, per a leak obtained by Politico’s Melissa Heikkila, would ban systems deemed to be “contravening the Union values or violating fundamental rights.”

The regulations, if passed, could limit the potential harm done by AI-powered systems involved in “high-risk” areas of operation, such as facial recognition and social credit systems.

Per an EU statement:

This proposal will aim to safeguard fundamental EU values and rights and user safety by obliging high-risk AI systems to meet mandatory requirements related to their trustworthiness. For example, ensuring there is human oversight, and clear information on the capabilities and limitations of AI.

The commission’s anticipated legislation comes after years of research conducted internally and with third-party groups, including the EU’s 2019 ethics guidelines for trustworthy AI.

It’s unclear at this time exactly when such legislation would pass; the EU has only given it a “2021” time frame.

Also unclear: exactly what this will mean for European artificial intelligence startups and research teams. It’ll be interesting to see exactly how development bans will play out, especially considering no such regulation exists in the US, China, or Russia.

The regulation is clearly aimed at big tech companies and medium-sized AI startups that specialize in controversial AI tech such as facial recognition. But, even with the leaked proposal, there’s still little in the way of information as to how the EU plans to enforce these regulations or exactly how systems will be vetted.