
Here’s why we should never trust AI to identify our emotions

Imagine you are in a job interview. As you answer the recruiter’s questions, an artificial intelligence (AI) system scans your face, scoring you for nervousness, empathy and dependability. It may sound like science fiction, but these systems are increasingly used, often without people’s knowledge or consent.

Emotion recognition technology (ERT) is in fact a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind emotion recognition systems is controversial: there are biases built into the systems.


Many companies use ERT to test customer reactions to their products, from cereal to video games. But it can also be used in situations with much higher stakes, such as in hiring, by airport security to flag faces as revealing deception or fear, in border control, in policing to identify “dangerous people” or in education to monitor students’ engagement with their homework.

Shaky scientific ground

Fortunately, facial recognition technology is receiving public attention. The award-winning film Coded Bias, recently released on Netflix, documents the discovery that many facial recognition technologies do not accurately detect darker-skinned faces. And the research team managing ImageNet, one of the largest and most important datasets used to train facial recognition, was recently forced to blur 1.5 million images in response to privacy concerns.

Revelations about algorithmic bias and discriminatory datasets in facial recognition technology have led large technology companies, including Microsoft, Amazon and IBM, to halt sales. And the technology faces legal challenges regarding its use in policing in the UK. In the EU, a coalition of more than 40 civil society organisations has called for an outright ban on facial recognition technology.

Like other forms of facial recognition, ERT raises questions about bias, privacy and mass surveillance. But ERT raises another concern: the science of emotion behind it is controversial. Most ERT is based on the theory of “basic emotions” which holds that emotions are biologically hard-wired and expressed in the same way by people everywhere.

This is increasingly being challenged, however. Research in anthropology shows that emotions are expressed differently across cultures and societies. In 2019, the Association for Psychological Science conducted a review of the evidence, concluding that there is no scientific support for the common assumption that a person’s emotional state can be readily inferred from their facial movements. In short, ERT is built on shaky scientific ground.

Also, like other forms of facial recognition technology, ERT is encoded with racial bias. A study has shown that systems consistently read black people’s faces as angrier than white people’s faces, regardless of the person’s expression. Although the study of racial bias in ERT is small, racial bias in other forms of facial recognition is well-documented.

There are two ways that this technology can hurt people, says AI researcher Deborah Raji in an interview with MIT Technology Review: “One way is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work — where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them.”

So even if facial recognition technology can be de-biased and accurate for all people, it still may not be fair or just. We see these disparate effects when facial recognition technology is used in policing and judicial systems that are already discriminatory and harmful to people of colour. Technologies can be dangerous when they don’t work as they should. And they can also be dangerous when they work perfectly in an imperfect world.

The challenges raised by facial recognition technologies – including ERT – do not have easy or clear answers. Solving the problems presented by ERT requires moving from AI ethics centred on abstract principles to AI ethics centred on practice and effects on people’s lives.

When it comes to ERT, we need to collectively examine the controversial science of emotion built into these systems and analyse their potential for racial bias. And we need to ask ourselves: even if ERT could be engineered to accurately read everyone’s inner feelings, do we want such intimate surveillance in our lives? These are questions that require everyone’s deliberation, input and action.

Citizen science project

ERT has the potential to affect the lives of millions of people, yet there has been little public deliberation about how – and if – it should be used. This is why we have developed a citizen science project.

On our interactive website (which works best on a laptop, not a phone) you can try out a private and secure ERT for yourself, to see how it scans your face and interprets your emotions. You can also play games comparing human versus AI skills in emotion recognition and learn about the controversial science of emotion behind ERT.

Most importantly, you can contribute your perspectives and ideas to generate new knowledge about the potential impacts of ERT. As the computer scientist and digital activist Joy Buolamwini says: “If you have a face, you have a place in the conversation.”

This article by Alexa Hagerty, Research Associate of Anthropology, University of Cambridge and Alexandra Albert, Research Fellow in Citizen Social Science, UCL, is republished from The Conversation under a Creative Commons license. Read the original article.


New to computer vision and medical imaging? Start with these 10 projects

Computer vision (CV) is a field of artificial intelligence (AI) and computer science that enables automated systems to see, i.e. to process images and video in a human-like manner to detect and identify objects or regions of importance, predict an outcome or even alter the image to a desired format [1]. The most popular use cases in the CV domain include automated perception for autonomous drive, augmented and virtual reality (AR, VR) for simulations, games and glasses, and fashion- or beauty-oriented e-commerce.

Medical image (MI) processing on the other hand involves much more detailed analysis of medical images that are typically grayscale such as MRI, CT, or X-ray images for automated pathology detection, a task that requires a trained specialist’s eye for detection. Most popular use cases in the MI domain include automated pathology labeling, localization, association with treatment or prognostics, and personalized medicine.

Prior to the advent of deep learning methods, 2D signal processing solutions such as image filtering, wavelet transforms, and image registration, followed by classification models [2–3], were heavily applied in solution frameworks. Signal processing solutions still remain the top choice for model baselining owing to their low latency and high generalizability across data sets.

However, deep learning solutions and frameworks have emerged as a new favorite owing to the end-to-end nature that eliminates the need for feature engineering, feature selection and output thresholding altogether. In this tutorial, we will review “Top 10” project choices for beginners in the fields of CV and MI and provide examples with data and starter code to aid self-paced learning.

CV and MI solution frameworks can be analyzed in three segments: Data, Process, and Outcomes [4]. It is important to always think of the data required for such solution frameworks as having the format “{X,Y}”, where X represents the image/video data and Y represents the data target or labels. While naturally occurring unlabelled images and video sequences (X) can be plentiful, acquiring accurate labels (Y) can be an expensive process. With the advent of several data annotation platforms such as [5–7], images and videos can be labeled for each use case.

Since deep learning models typically rely on large volumes of annotated data to automatically learn features for subsequent detection tasks, the CV and MI domains often suffer from the “small data challenge”, wherein the number of samples available for training a machine learning model is several orders of magnitude smaller than the number of model parameters.

The “small data challenge”, if unaddressed, can lead to overfit or underfit models that may not generalize to new unseen test data sets. Thus, the process of designing a solution framework for the CV and MI domains must always include model complexity constraints, wherein models with fewer parameters are typically preferred to prevent overfitting.

Finally, the solution framework outcomes are analyzed both qualitatively through visualization solutions and quantitatively in terms of well-known metrics such as precision, recall, accuracy, and F1 or Dice coefficients [8–9].

The projects listed below span three difficulty levels (Easy, Medium, Hard) with respect to data pre-processing and model building. They also represent a variety of use cases that are currently prevailing in the research and engineering communities. Each project is defined in terms of its Goal, Methods, and Results.

Project 1: MNIST and Fashion MNIST for Image Classification (Level: Easy)

Goal: To process images (X) of size [28×28] pixels and classify them into one of the 10 output categories (Y). For the MNIST data set, the input images are handwritten digits in the range 0 to 9 [10]. The training and test data sets contain 60,000 and 10,000 labeled images, respectively. Inspired by the handwritten digit recognition problem, another data set called the Fashion MNIST data set was launched [11] where the goal is to classify images (of size [28×28]) into clothing categories as shown in Fig. 1.

Fig 1: The MNIST and Fashion MNIST data sets with 10 output categories each. (Image by Author)

Methods: When the input images are small ([28×28] pixels) and grayscale, convolutional neural network (CNN) models, in which the number of convolutional layers can vary from one to several, are suitable classification models. An example of an MNIST classification model built using Keras is presented in the colab file:

MNIST colab file

Another example of classification on the Fashion MNIST data set is shown in:

Fashion MNIST Colab file

In both instances, the key parameters to tune include the number of layers, dropout, optimizer (adaptive optimizers preferred), learning rate, and kernel size, as seen in the linked notebooks. Since this is a multi-class problem, the ‘softmax’ activation function is used in the final layer so that the outputs form a probability distribution over the 10 classes.
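The linked notebooks use Keras; a minimal sketch of this kind of model follows. The layer counts, dropout rate, kernel size, and learning rate below are illustrative values, not the notebooks' exact settings.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mnist_cnn(num_classes=10):
    """A small CNN for [28x28] grayscale inputs and 10 output classes."""
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, kernel_size=3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Conv2D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dropout(0.25),                 # tunable regularization
        layers.Dense(128, activation="relu"),
        # Softmax turns the 10 logits into a probability distribution.
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mnist_cnn()
# Untrained forward pass on a random image, just to show the output format.
probs = model.predict(np.random.rand(1, 28, 28, 1), verbose=0)
```

Varying the number of convolutional layers and the dropout and learning rates is how the accuracy trends reported in the Results are obtained.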

Results: As the number of convolutional layers increases from 1 to 10, the classification accuracy is found to increase as well. The MNIST data set is well studied in the literature, with test accuracies in the range of 96–99%. For the Fashion MNIST data set, test accuracies are typically in the range of 90–96%. An example visualization of the MNIST classification outcome using a CNN model is shown in Fig 2 below (see the front-end visualization here).

Fig. 2: Example of visualizing the outcome of a CNN model for MNIST data. The input is shown in the top left corner and the activations of the respective layers are shown alongside. The final prediction is split between 5 and 8.

Project 2: Pathology Classification for Medical Images (Level: Easy)

Goal: To classify medical images (acquired using Optical Coherence Tomography, OCT) as Normal, Diabetic Macular Edema (DME), Drusen, or Choroidal Neovascularization (CNV), as shown in [12]. The data set contains about 84,000 training images and about 1,000 labeled test images, and each image has a width of 800 to 1,000 pixels, as shown in Fig 2.

Fig 2: Examples of OCT images from the Kaggle Dataset in [12].

Methods: Deep CNN models such as ResNet and CapsuleNet [12] have been applied to classify this data set. The data needs to be resized to [512×512] or [256×256] to be fed to standard classification models. Since medical images have fewer variations in object categories per image frame compared to non-medical outdoor and indoor images, the number of medical images required to train large CNN models is found to be significantly smaller than the number of non-medical images.

The work in [12] and the OCT code base demonstrates retraining the ResNet layer for transfer learning and classification of test images. The parameters to be tuned here include optimizer, learning rate, size of input images, and number of dense layers at the end of the ResNet layer.
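A sketch of such a transfer-learning setup in Keras is shown below. The head size is an illustrative assumption; the four output classes follow the OCT categories above, and `weights=None` is used here only to avoid a download, whereas actual retraining would start from `weights="imagenet"` or the checkpoint in [12].

```python
import tensorflow as tf

# ResNet50 backbone used as a frozen feature extractor.
base = tf.keras.applications.ResNet50(weights=None, include_top=False,
                                      input_shape=(256, 256, 3),
                                      pooling="avg")
base.trainable = False  # keep the pre-trained features fixed

# Only this small dense head is retrained on the OCT classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),    # tunable dense layer(s)
    tf.keras.layers.Dense(4, activation="softmax"),  # Normal, DME, Drusen, CNV
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy")
```

Freezing the backbone means only the head's weights are updated, which is why a comparatively small number of labeled OCT images suffices.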

Results: For the ResNet model, test accuracy can vary between 94–99% depending on the number of training images, as shown in [12]. Fig. 3 qualitatively demonstrates the performance of the classification model.

Fig. 3: Regions of interest (ROIs) for each pathology superimposed on the original image using the Gradcam library in python. (Image by author)

These visualizations are produced using the Gradcam library, which projects the CNN layer activations onto the original image to reveal the regions of interest, or automatically detected features of importance, for the classification task. Gradcam is also available through the tf_explain library.
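tf_explain packages this behind a `GradCAM` class; the core computation it performs (weighting each convolutional activation map by its spatially averaged gradient, summing, and keeping the positive part) can be sketched in NumPy on hypothetical activation and gradient arrays:

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Compute a Grad-CAM heatmap from the last conv layer's
    activations (H, W, C) and the gradients of the class score
    with respect to them (same shape)."""
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(0, 1))                        # (C,)
    # Weighted sum of activation maps, then ReLU keeps positive evidence.
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)  # (H, W)
    # Normalize to [0, 1] so the map can be overlaid on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
acts = rng.random((7, 7, 64))          # stand-in conv activations
grads = rng.standard_normal((7, 7, 64))  # stand-in gradients
heatmap = grad_cam_map(acts, grads)
```

In practice the heatmap is upsampled to the input resolution and alpha-blended over the original image, which is what produces overlays like Fig. 3.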

Project 3: AI Explainability for Multi-label Image Classification (Level: Easy)

Goal: CNN models enable end-to-end delivery, meaning there is no need to engineer and rank features for classification, and the model outcome is directly the desired process outcome. However, it is often important to visualize and explain CNN model performance, as shown in the later parts of Project 2.

Some well-known visualization and explainability libraries are tf_explain and Local Interpretable Model-Agnostic Explanations (LIME). In this project, the goal is to perform multi-label classification and explain which features the CNN model is using to classify images in a particular way. Here, we consider a multi-label scenario wherein one image can contain multiple objects, for example a cat and a dog, as in the Colab for LIME.

Here, the input is images with cat and dog in it and the goal is to identify which regions correspond to a cat or dog respectively.

Method: In this project, each image is subjected to super-pixel segmentation, which divides the image into several sub-regions with similar pixel color and texture characteristics. The number of sub-regions can be provided manually as a parameter. Next, the InceptionV3 model is invoked to assign each superpixel sub-region a probability of belonging to one of the 1,000 classes that InceptionV3 was originally trained on. Finally, the object probabilities are used as weights to fit a regression model that explains the ROIs corresponding to each class, as shown in Fig. 4.

Fig 4: Explainability of image super-pixels using regression-like models in LIME. (Image by author)
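The three steps above (perturbing superpixels, querying the classifier, and fitting a regression surrogate) can be sketched with a toy stand-in for InceptionV3. The `black_box_prob` function and the choice of which superpixels cover the "dog" are hypothetical, used only to make the mechanism concrete:

```python
import numpy as np

rng = np.random.default_rng(42)
n_superpixels = 10

# Pretend superpixels 2 and 5 cover the dog in the image.
true_weights = np.zeros(n_superpixels)
true_weights[[2, 5]] = 1.0

def black_box_prob(mask):
    """Stand-in for InceptionV3: probability of class 'dog' given
    which superpixels are kept (1) or grayed out (0)."""
    return 1.0 / (1.0 + np.exp(-(mask @ true_weights - 1.0)))

# 1. Perturb: random on/off masks over the superpixels.
masks = rng.integers(0, 2, size=(200, n_superpixels))
# 2. Query the model on each perturbed image.
probs = np.array([black_box_prob(m) for m in masks])
# 3. Fit a linear surrogate; its coefficients rank each superpixel.
design = np.column_stack([masks, np.ones(len(masks))])  # add intercept
coef, *_ = np.linalg.lstsq(design, probs, rcond=None)
importance = coef[:n_superpixels]
```

The superpixels with the largest regression coefficients are the ROIs LIME highlights for that class; the real library adds locality weighting and sparsity on top of this idea.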

Results: Using the proposed method, ROIs in most non-medical images should be explainable. Qualitative assessment and explainability as shown here are specifically useful in corner cases, or in situations where the model misclassifies or misses objects of interest. In such situations, explaining what the CNN model is looking at, and boosting ROIs accordingly to correct overall classification performance, can help significantly reduce data-induced biases.

Project 4: Transfer learning for 2D Bounding box detection on new objects (Level: Medium)

Goal: The next step after image classification is the detection of objects of interest by placing bounding boxes around them. This is a significant problem in the autonomous drive domain to accurately identify moving objects such as cars and pedestrians from static objects such as roadblocks, street signs, trees, and buildings.

The major difference between this project and the prior projects is the format of data. Here, labels Y are typically in the form of [x,y,w,h] per object of interest, where (x,y) typically represent the top left corner of the bounding box and w and h correspond to the width and height of the output bounding box. In this project, the goal is to leverage a pre-trained classifier for its feature extraction capabilities and then to retrain it on a small set of images to create a tight bounding box around a new object.

Method: In the Bounding Box colab, we extend a pre-trained object detector, a single shot detector (SSD) with ResNet50 skip connections and a feature pyramid network backbone that is pre-trained for object detection on the MS-COCO dataset [13], to detect a completely unseen new object category, a rubber duck in this case.

In this transfer learning setup, the already learned weights from early layers of the object detector are useful to extract local structural and textural information from images and only the final classifier layer requires retraining for the new object class. This enables retraining the object detector for a new class, such as a rubber duck in this use case, using as few as 5–15 images of the new object. The parameters to be tuned include optimizer, learning rate, input image size, and number of neurons in the final classifier layer.

Results: One major difference between object detectors and the CNN-based classifier models shown above is an additional output metric called Intersection over Union (IoU) [11], which measures the extent of overlap between the actual and predicted bounding boxes. Additionally, an object detector model typically consists of a classifier (that predicts the object class) and a bounding box regressor that predicts the dimensions of the bounding box around the object. An example of the Google API for object detection on a new unseen image is shown in Fig. 5 below.
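IoU itself is simple to compute from the [x,y,w,h] box format described earlier; a minimal implementation:

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes in [x, y, w, h] format,
    where (x, y) is the top-left corner."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width and height of the intersection rectangle (0 if disjoint).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

perfect = iou([10, 10, 50, 50], [10, 10, 50, 50])   # identical boxes
disjoint = iou([0, 0, 10, 10], [100, 100, 10, 10])  # no overlap
```

An IoU threshold (commonly 0.5) decides whether a predicted box counts as a correct detection when computing precision and recall.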

Extensions of the 2D bounding box detector to 3D bounding boxes specifically for autonomous drive are shown in these projects.

Fig 5: Example of 2D bounding box detection using the TensorFlow API for object detection

Project 5: Personalized Medicine and Explainability (Level: Medium)

Goal: In this project, the goal is to automatically segment ROIs from multiple pathology sites to classify the extent of anemia-like pallor in a patient and track the pallor over time [13]. The two major differences between this project and the previous ones are that: 1) pallor needs to be detected across multiple image sites, such as the conjunctiva (under eye) and tongue, to predict a single label as shown in Fig. 6, and 2) ROIs corresponding to pallor need to be displayed and tracked over time.

Fig 6: Example of anemia-like pallor detection using images processed from multiple pathological sites. (Image by author)

Methods: For this project, feature-based models and CNN-based classifiers are applied with heavy data augmentation using the ImageDataGenerator in Keras. To fuse the outcomes from multiple pathology sites, early, mid, or late fusion can be applied.

The work in [13] applies late fusion, wherein the layer before the classifier, which is considered to be the optimal feature representation of the image, is used to fuse features across multiple pathological sites. Finally, the Deepdream algorithm, as shown in the Deepdream Colab, is applied to the original eye and tongue images to visualize the ROIs and explain the extent of pathology. The parameters to tune in this project include those from Project 2 along with the additive gradient factor for the Deepdream visualizations.
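Late fusion amounts to concatenating the per-site feature vectors before a single final classifier. A toy NumPy sketch follows; the 128-dimensional features and the one-layer softmax classifier are illustrative assumptions, not the architecture in [13]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical penultimate-layer ("optimal feature representation")
# outputs from two site-specific models for one patient:
eye_features = rng.random(128)     # conjunctiva branch
tongue_features = rng.random(128)  # tongue branch

# Late fusion: concatenate the per-site features, then classify once.
fused = np.concatenate([eye_features, tongue_features])  # (256,)

# One dense softmax layer as a stand-in classifier: pallor vs. no pallor.
W = rng.standard_normal((2, fused.size))
logits = W @ fused
shifted = logits - logits.max()                 # numerically stable softmax
probs = np.exp(shifted) / np.exp(shifted).sum()
```

Early fusion would instead concatenate the raw images or low-level features before any shared layers; late fusion keeps each site's branch independent until the final decision.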

Results: The data for this work is available for benchmarking. Using the Deepdream algorithm, the visualizations in Fig. 7 show a higher concentration of features corresponding to pallor in the under-eye blood vessels than anywhere else in the eye. Similarly, we observe differences in features between the inner and outer segments of the tongue. These assessments are useful for creating a personalized pathology tracking system for patients with anemia.

Fig 7: Example of feature concentrations from the Deep Dream implementation. Heavy concentration of gradients is observed in the conjunctive or under-eye blood vessel regions. (Image by author)

Project 6: Point cloud segmentation for object detection. (Level: Hard)

Goal: In this project, the input is a stream of point clouds, i.e., the output of Lidar sensors, which provides depth resolution. The primary difference between Lidar point clouds and images is that point clouds provide 3D resolution: each voxel (the 3D equivalent of a pixel) represents the distance of an object from the Lidar source and the height of the object relative to it. The main challenges posed by point cloud data models are i) model computational complexity if 3D convolutions are used and ii) object transformation invariance, which means a rotated object should still be detected as the same object, as shown in [13].

Method: The data set for this project is the ModelNet40 shape classification benchmark, which contains over 12,000 3D models from 40 object classes. Each object is sub-sampled to a fixed number of points, followed by augmentation to cater to multiple transformations in shape. Next, 1D convolutions are used to learn shape features using the Pytorch library in the Pointnet colab.
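The sub-sampling step can be sketched in NumPy. The fixed point count of 1,024 and the unit-sphere normalization are common PointNet-style preprocessing choices, assumed here rather than taken from the colab:

```python
import numpy as np

rng = np.random.default_rng(1)

def subsample_and_normalize(points, n_points=1024):
    """Sub-sample a point cloud of shape (N, 3) to a fixed size and
    normalize it into the unit sphere, as done before feeding
    PointNet-style 1D-convolution models."""
    # Sample without replacement when enough points are available.
    idx = rng.choice(len(points), size=n_points,
                     replace=len(points) < n_points)
    sampled = points[idx]
    sampled = sampled - sampled.mean(axis=0)      # center at the origin
    scale = np.linalg.norm(sampled, axis=1).max()
    return sampled / scale                        # fit inside unit sphere

cloud = rng.random((5000, 3)) * 10.0              # a raw Lidar-like cloud
fixed = subsample_and_normalize(cloud)
```

The fixed-size, normalized output is what makes every object the same tensor shape, so batches can be fed to the shared 1D convolutions regardless of how many raw points each scan contained.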

Results: The outcome of the model is summarized in Fig. 8 below. Up to 89% training accuracy for object classification can be achieved by this method, which can also be extended to 3D semantic segmentation. Extensions of this work can be useful for 3D bounding box detection in autonomous drive use cases.

Fig. 8: Image from [15] that identifies objects from the point clouds

Project 7: Image semantic segmentation using U-net for binary and multi-class. (Medium)

Goal: The CNN models so far have been applied to automatically learn features that can then be used for classification. This process is known as feature encoding. As a next step, we apply a decoder unit with a structure similar to the encoder to enable the generation of an output image. The combination of an encoder-decoder pair enables the input and output to have similar dimensions, i.e. the input is an image and the output is also an image.

Methods: The encoder-decoder combination with residual skip connections is popularly known as the U-net [15]. For binary and multi-class problems, the data has to be formatted such that if X (input image) has dimensions [m x m] pixels, Y has dimensions [m x m x d], where ‘d’ is the number of classes to be predicted. The parameters to tune include optimizer, learning rate, and depth of the U-net model as shown in [15] and Fig. 9 below (source here).

Fig. 9. Example of U-net model.
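The [m x m x d] target format described above is a one-hot encoding of an integer class mask, for example:

```python
import numpy as np

def to_one_hot(mask, num_classes):
    """Convert an [m x m] integer class mask into the
    [m x m x d] one-hot target format expected by U-net."""
    return np.eye(num_classes)[mask]

# A toy 4x4 mask with 3 classes (0 = background, 1 and 2 = objects):
mask = np.array([[0, 0, 1, 1],
                 [0, 2, 2, 1],
                 [0, 2, 2, 0],
                 [0, 0, 0, 0]])
target = to_one_hot(mask, num_classes=3)  # shape (4, 4, 3)
```

Each pixel's d-length vector has exactly one 1, marking its class; the U-net's final d-channel softmax layer is trained to reproduce these maps.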

Results: The U-net model can learn to generate binary and multi-class semantic maps from large and small data sets [16–17], but it is found to be sensitive to data imbalance. Thus, selecting the right training data set is significantly important for optimal outcomes. Other extensions to this work include adding DenseNet connections to the model, or using other encoder-decoder networks such as MobileNet or Xception [17].

Project 8: Machine Translation for Posture and Intention Classification (Level: Hard)

Goal: Automated detection of posture or gesture often includes keypoint identification (such as identification of the skeletal structure) in videos that can lead to identification of posture (standing, walking, moving) or intention for pedestrians (crossing road, not crossing), etc. [18–19], as shown in Fig. 10 below. For this category of problems, keyframe information from multiple subsequent video frames is processed collectively to generate pose/intention-related predictions.


Big Tech is pushing states to pass privacy laws, and yes, you should be suspicious

Concerned about growing momentum behind efforts to regulate the commercial use of personal data, Big Tech has begun seeding watered-down “privacy” legislation in states with the goal of preempting greater protections, experts say.

The swift passage in March of a consumer data privacy law in Virginia, which Protocol reported was originally authored by Amazon with input from Microsoft, is emblematic of an industry-driven, lobbying-fueled approach taking hold across the country. The Markup reviewed existing and proposed legislation, committee testimony, and lobbying records in more than 20 states and identified 14 states with privacy bills built upon the same industry-backed framework as Virginia’s, or with weaker models. The bills are backed by a who’s who of Big Tech–funded interest groups and are being shepherded through statehouses by waves of company lobbyists.

Credit: IAPP
Data sourced from IAPP

Meanwhile, the small handful of bills that have not adhered to two key industry demands—that companies can’t be sued for violations and consumers would have to opt out of rather than into tracking—have quickly died in committee or been rewritten.

Experts say Big Tech’s push to pass friendly state privacy bills ramped up after California enacted sweeping privacy bills in 2018 and 2020—and that the ultimate goal is to prompt federal legislation that would potentially override California’s privacy protections.

“The effort to push through weaker bills is to demonstrate to businesses and to Congress that there are weaker options,” said Ashkan Soltani, a former chief technologist for the Federal Trade Commission who helped author the California legislation. “Nobody saw Virginia coming. That was very much an industry-led effort by Microsoft and Amazon. At some point, if multiple states go the way of Virginia, you might not even get companies to honor California’s [rules].”

California’s laws, portions of which don’t go into effect until 2023, create what is known as a “global opt out.” Rather than every website requiring users to go through separate opt-out processes, residents can use internet browsers and extensions that automatically notify every website that a user wishes to opt out of the sale of their personal data or use of it for targeted advertising—and companies must comply. The laws also allow consumers to sue companies for violations of the laws’ security requirements and created the California Privacy Protection Agency to enforce the state’s rules.

“Setting up these weak foundations is really damaging and really puts us in a worse direction on privacy in the U.S.,” said Hayley Tsukayama, a legislative activist for the Electronic Frontier Foundation. “Every time that one of these bills passes, Virginia being a great example, people are saying ‘This is the model you should be looking at, not California.’ ”

Amazon did not respond to requests for comment, and Microsoft declined to answer specific questions on the record.

Industry groups, however, were not shy about their support for the Virginia law and copycats around the country.

The Virginia law is a “business and consumer friendly approach” that other states considering privacy legislation should align with, the Internet Association, an industry group that represents Big Tech, wrote in a statement to The Markup.

Big Tech’s fingerprints are all over state privacy fights

In testimony before lawmakers, tech lobbyists have criticized the state-by-state approach of making privacy legislation and said they would prefer a federal law. Tech companies offered similar statements to The Markup.

Google spokesperson José Castañeda declined to answer questions but emailed The Markup a statement: “As we make privacy and security advancements to protect consumers, we’ll continue to advocate for sensible data regulations around the world, including strong, comprehensive federal privacy legislation in the U.S.”

But at the same time, the tech and ad industries have taken a hands-on approach to shape state legislation. Mostly, industry has advocated for two provisions. The first is an opt-out approach to the sale of personal data or using it for targeted advertising, which means that tracking is on by default unless the customer finds a way to opt out of it. Consumer advocates prefer privacy to be the default setting, with users given the freedom to opt in to certain uses of their data. The second industry desire is preventing a private right of action, which would allow consumers to sue for violations of the laws.

The industry claims such privacy protections are too extreme.

“That may be a bonanza for the trial bar, but it will not be good for business,” said Dan Jaffe, group executive vice president for government relations for the Association of National Advertisers, which has lobbied heavily in states and helped write model federal legislation. TechNet, another Big Tech industry group that has been deeply engaged in lobbying state lawmakers, said that “enormous litigation costs for good faith mistakes could be fatal to businesses of all sizes.”

Through lobbying records, recordings of public testimony, and interviews with lawmakers, The Markup found direct links between industry lobbying efforts and the proliferation of these tech-friendly provisions in Connecticut, Florida, Oklahoma, and Washington. And in Texas, industry pressure has shaped an even weaker bill.

Protocol has previously documented similar efforts in Arizona, Hawaii, Illinois, and Minnesota.

Additionally, The Markup found a handful of states—particularly North Dakota and Oklahoma—in which tech lobbyists have stepped in to thwart efforts to enact stricter laws.


The path of Connecticut’s bill is illustrative of how these battles have played out. There, state Senate majority leader Bob Duff introduced a privacy bill in 2020 that contained a private right of action. During the bill’s public hearing last February, Duff said he looked out on a room “literally filled with every single lobbyist I’ve ever known in Hartford, hired by companies to defeat the bill.”

The legislation failed. Duff introduced a new version of it in 2021, and it too died in committee following testimony from interest groups funded by Big Tech, including the Internet Association and The Software Alliance.

According to Duff and Sen. James Maroney, who co-chairs the Joint Committee on General Law, those groups are now pushing a separate privacy bill, written using the Virginia law as a template. Duff said lawmakers “had a Zoom one day with a lot of big tech companies” to go over the bill’s language.

“Our legislative commissioner took the Virginia language and applied Connecticut terminology,” Maroney said.

That industry-backed bill passed through committee unanimously on March 23.

“It’s an uphill battle because you’re fighting a lot of forces on many fronts,” Duff said. “They’re well funded, they’re well heeled, and they just hire a lot of lobbyists to defeat legislation for the simple reason that there’s a lot of money in online data.”

Google has spent $100,000 lobbying in Connecticut since 2019, when Duff first introduced a consumer data privacy bill. Apple and Microsoft have each spent $124,000, Amazon has spent $116,000, and Facebook has spent $155,000, according to the state’s lobbyist reporting database.

Microsoft declined to answer questions and instead emailed The Markup links to the testimony its company officials gave in Virginia and Washington.

The Virginia model “is a thoughtful approach to modernize United States privacy law, something which has become a very urgent need,” Ryan Harkins, the company’s senior director of public policy, said during one hearing.

Google declined to respond to The Markup’s questions about their lobbying. Apple and Amazon did not respond to requests for comment.


In Oklahoma, Rep. Collin Walke, a Democrat, and Rep. Josh West, the Republican majority leader, co-sponsored a bill that would have banned businesses from selling consumers’ personal data unless the consumers specifically opted in and gave consumers the right to sue for violations. Walke told The Markup that the bipartisan team found themselves up against an army of lobbyists from companies including Facebook, Amazon, and leading the effort, AT&T.

AT&T lobbyists persuaded House leadership to delay the bill’s scheduled March 2 hearing, Walke said. “For the whole next 24-hour period, lobbyists were pulling members off the house floor and whipping them.”

Walke said that, to try to get the bill through the Senate, he agreed to meetings with Amazon, internet service providers, and local tech companies, eventually adopting a “Virginia-esque” bill. But certain companies remained resistant (Walke declined to specify which ones), and the bill died without receiving a hearing.

AT&T did not respond to questions about its actions in Oklahoma or other states where it has fought privacy legislation. Walke said he plans to reintroduce the modified version of the bill again next session.


In Texas, Rep. Giovanni Capriglione first introduced a privacy bill in 2019. He told The Markup he was swiftly confronted by lobbyists from Amazon, Facebook, Google, and industry groups representing tech companies. The state then created a committee to study data privacy, which was populated in large part by industry representatives.

Facebook declined to answer questions on the record for this story.

Capriglione introduced another privacy bill in 2021, but given “Texas’s conservative nature,” he said, and the previous pushback, it doesn’t include any opt-in or opt-out requirement or a private right of action. But he has still received pushback from industry over issues like how clear and understandable website privacy policies have to be.

“The ones that were most interested were primarily the big tech companies,” he said. “I received significant opposition to making any changes” to the status quo.


The privacy bill furthest along of all pending bills is in Washington, the home state of Microsoft and Amazon. The Washington Privacy Act was first introduced in 2019 and was the inspiration for Virginia’s law. Microsoft, Amazon, and, more recently, Google have all testified in favor of the bill. It passed the state Senate 48–1 in March.

A House committee considering the bill has proposed an amendment that would create a private right of action, but it is unclear whether that will survive the rest of the legislative process.

Other States

Other states—Illinois, Kentucky, Alabama, Alaska, and Colorado—have Virginia-like bills under consideration. State representative Michelle Mussman, the sponsor of a privacy bill in Illinois, and state representative Lisa Willner, the sponsor of a bill in Kentucky, told The Markup that they had not consulted with industry or made privacy legislation their priority during 2021. But when working with legislative staff to author the bills they eventually put forward, they looked to other states for inspiration, and the framework they settled on was significantly similar to Virginia’s on key points, according to The Markup’s analysis.

The sponsors of bills in Alabama, Alaska, and Colorado did not respond to interview requests, and public hearing testimony or lobbying records in those states were not yet available.

The campaign against tougher bills

In North Dakota, lawmakers in January introduced a consumer data privacy bill that a coalition of advertising organizations called “the most restrictive privacy law in the United States.” It would have included an opt-in framework, a private right of action, and broad definitions of the kind of data and practices subject to the law.

It failed 75–19 in the House shortly after a public hearing in which only AT&T, data broker RELX, and industry groups like The Internet Association, TechNet, and the State Privacy and Security Coalition showed up to testify—all in opposition. And while the big tech companies didn’t directly testify on the bill, lobbying records suggest they exerted influence in other ways.

The 2020–2021 lobbyist filing period in North Dakota, which coincided with the legislature’s study and hearing on the bill, marked the first time Amazon has registered a lobbyist in the state since 2018 and the first time Apple and Google have registered lobbyists since the state began publishing lobbying disclosures in 2016, according to state lobbying records.

A Mississippi bill containing a private right of action met a similar fate. The bill’s sponsor, Sen. Angela Turner-Ford, did not respond to an interview request.

In Florida, meanwhile, a bill originally modeled after California’s laws has been the subject of intense industry lobbying, both in public and behind the scenes. On April 6, a Florida Senate committee voted to remove the private right of action, leaving a bill substantially similar to Virginia’s. State senator Jennifer Bradley, the sponsor of Florida’s bill, did not respond to The Markup’s request for comment.

Several bills that include opt-in frameworks, private rights of action, and other provisions that experts say make for strong consumer protection legislation are beginning to make their way through statehouses in Massachusetts, New York, and New Jersey. It remains to be seen whether those bills’ current protections can survive the influence of an industry keen to set the precedent for expected debate over a federal privacy law.

If the model that passed in Virginia and is moving forward in other states continues to win out, it will “really hamstring federal lawmakers’ ability to do anything stronger, which is really concerning considering how weak [that model] is,” said Jennifer Lee, the technology and liberty project manager for the ACLU of Washington. “I think it really will entrench the status quo in allowing companies to operate under the guise of privacy protections that aren’t actually that protective.”

This article by Todd Feathers was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

The MowRo is the electric lawn-mowing robot that liberates you from home landscaping duties

TLDR: The MowRo is a robot lawn mower that cuts your grass all by itself, with programmable scheduling, a rechargeable battery, and silent running, so you waste no time on the chore.

There are regular household chores that you don’t particularly enjoy, but they have to get done. So…you do ‘em. But when you sit back and actually think about how much time you spend handling these menial tasks over the course of your many years, the blown time becomes pretty astronomical.

Like…did you know you could spend upwards of 47 days mowing your lawn during your life? Granted, depending on the type of mower, that number could actually be as low as seven days. But still…would you really rather spend a week of your life cutting grass than doing…well, almost anything else?

The answer is a resounding no. And with the addition of the MowRo RM24 Robot Lawn Mower to your groundskeeping crew, you might not have to think about lawn mowing for more than another hour or two before you finally shuffle off this mortal coil for good one day.

Just like one of those robot vacuums that skitters around your home sucking up debris, the MowRo is the robot assigned to make sure your lawn always looks like it could host a PGA Tour event.

Housing a powerful 2,900 RPM motor, the MowRo scoots over your grass, shaving blades down to just the level you want, from a lush healthy 2.5-inch-high shag down to a closely cropped, finely-manicured sheen of grassy goodness just an inch thick.

This fully-autonomous mower allows users to set a mowing schedule that works for them, so you can have a fresh mow every weekend, twice a week, or heck, maybe even every day.

Owners lay down a low-profile, no-trip perimeter wire along the outer edge of the lawn, which acts like an invisible wall to guide the MowRo’s mowing path. As for safety, the blades are housed inside a protective blade guard with a bump sensor, allowing the grass to pass through while protecting pets, children, or even you, should you absent-mindedly walk barefoot through a mowing session.

This mower glides along on a Samsung 4Ah 28-volt lithium-ion rechargeable battery, so users not only avoid the billowing engine fumes of their old mower, but the unit emits absolutely zero greenhouse gases in the process. And it’s ultra-quiet too, never rising above a low-key 65 decibels.

Most of the time, the unit sits inside its docking station, a protective home when it rains, when it needs to recharge its battery, or just when its robo-mowing task is complete.

Regularly $999, the MowRo RM24 Robot Lawn Mower is now $150 off, available for only $849.99.

Prices are subject to change.

Canadian police pull over car thinking driver is drunk — nope, it’s ‘self-driving’

Right, it’s Friday, it’s about four o’clock in the afternoon where I am right now, and I’m about to clock off, but I just had to share this HILARIOUS story from Vancouver with you.

According to the Vancouver Island Free Daily, Campbell River police pulled over a vehicle last week. Boring! But it gets better.

After some erratic maneuvers and failing to stay in its lane, officers believed that the vehicle was being driven by a drunk person. So they stopped it to investigate further.

Nothing unusual here.

When the police investigated the driver and passenger of the car, they found they were both completely sober. So what was going on?

Well, it turns out that the car had its supposed self-driving features engaged.

“Drivers need to understand that they are responsible for what their vehicle is doing,” Constable Maury Tyre told the Vancouver Island Free Daily — I didn’t make that name up, btw.

Unfortunately, it’s not clear what car the pair were driving, so we can’t say for certain what it was. But we humans can read (and stay between) the lines better than a self-driving car, so I don’t think I need to say what car was probably being driven.

All the reporting I read on this incident referred to the car as self-driving. Even though we have no idea what the car was, we can say for certain that it wasn’t a self-driving car, and didn’t have autonomous capability.

There are no consumer-available vehicles that meet this standard yet.

Yet again, this is another classic case of drivers who may have been sober, but have been autonowashed by the Big Self-Driving Cartels.

Do EVs excite your electrons? Do ebikes get your wheels spinning? Do self-driving cars get you all charged up? 

Then you need the weekly SHIFT newsletter in your life. Click here to sign up.

How to turn your ebike into a cargo-hauling car replacement

Let’s get this out of the way: an ebike can never fully replace a car. At least, not in the sense of carrying four passengers and a trunk full of groceries while being shielded from the rain. There will be times when having a car is more convenient.

But sometimes bikes are the more convenient vehicle too. As a city-dweller who used to occasionally drive, not having to park and cutting through traffic saves me a bunch of time. I’ve been riding ebikes for transportation almost exclusively for the past couple of years, and I’ve come to realize that for almost all day-to-day tasks, and even some irregular ones, an electric bike is more than enough.

Yes, you’ll have to be a little bit creative, especially if you haven’t got yourself a proper cargo ebike. But in my case, the biggest barrier for getting stuff done on two wheels was really the mental one. When I started testing ebikes, I was a noob to cycling in general and had no idea of the wealth of accessories available to help make carrying stuff easier.

What follows are some of the most useful accessories I’ve found for carrying stuff on the ebikes I’ve tested. Most I have tried myself, but for those I haven’t, I’ve done research thorough enough to be comfortable recommending them.

Some disclaimers first: this guide was written with the assumption that you’re a newcomer riding an ebike. So while all of the accessories here work just fine on a regular bike, I’m giving little consideration to things like weight or aerodynamics. Practicality is my only concern. I also know this list doesn’t include every cargo accessory in the world, sue me.


A rear rack

The rear rack is the go-to cargo accessory because it allows you to carry stuff without affecting control over your bike too much. You can hang panniers from them (essentially tote bags meant to be attached to a bike rack, more on these in a bit), and they usually have a flat top for carrying more stuff — like a pizza. Alternatively, you can attach a bag or basket to the rack top; for a cheap DIY setup, attach a milk crate with some zip ties.

If you bought an ebike, there’s a good chance it already comes with a rack. If not, check with the manufacturer for compatible models; it may offer one it has tested for compatibility with your bike. In any case, there are about a bazillion racks on the market, so you’ll almost certainly be able to find one for your bike.

Many, if not most, bikes come with mounting points for a rack, but if not, there are racks designed to be virtually universal too. There are racks that can attach to your bike’s seatpost, others that attach to your wheels like the heavy-duty Old Man Mountain racks, or the Thule Pack N Pedal, which wraps around your bike’s seat stays.

Bungees and straps

If you want to attach stuff to the top of your rack, you’ll need some kind of strap to tie it down. Bungee cords are the classic choice, keeping items under tension; you can find whole kits of them for cheap. There are also cargo nets that can help tie down larger, more pliable items, or cover up a basket.

Personally though, I’m a big fan of ROK straps. These are a bit more expensive, but attach securely to your bike’s rack and only have a small elastic portion to them, keeping your items more secure than a typical bungee strap.


Panniers

Panniers are probably the easiest way to carry a small to medium load of groceries on your bike. These bags hang off the sides of your rack and are quick to attach and detach.

They usually come with some kind of handle so you can use them as shopping bags, and some also double as backpacks or messenger bags for commuters. They come in a myriad of shapes and sizes.

A front basket or rack

Rear racks are super-versatile, but they don’t allow you to see your cargo. That’s why I prefer carrying stuff on the front of my bike whenever possible.

Many ebikes have mounting points on the headtube that let you fix a front rack or basket directly onto the frame. If these are available, they are probably your best bet for front cargo, as they will be more stable than accessories that attach to your handlebars or forks; those will sway when you turn the bike, making it trickier to balance.

If no frame mount for a rack is available, I’m a big fan of Wald’s quick-release front basket. This lightweight basket attaches to your handlebars and can be used as a fairly sizeable shopping basket when you arrive at your destination. Being able to remove the basket also helps keep the bike more maneuverable if you need to squeeze it through a tight staircase like me. Alternatively, there are about a million handlebar bags out there.

If going with a front basket, I definitely suggest keeping a cargo net on your bike — it’ll help prevent stuff from flying out and allow you to carry a bit more than you might’ve dared otherwise.

If you prefer a front rack that attaches to your fork, Soma’s PortFolder is pretty neat. When folded, it keeps a minimal profile and can carry some front panniers (they make those too), but it’s also able to unfold into a large flat surface for carrying multiple pizzas (you can tell that carrying pizzas is a concern of mine).

Another advantage of a front rack (as opposed to a handlebar bag or basket) is that it keeps your cargo’s center of gravity lower, helping keep your bike more stable.

And if your fork doesn’t have mounting points for a rack, the aforementioned universal Old Man Mountain and Thule Pack n Pedal racks can actually be mounted up front too.

A backpack

I almost never go grocery shopping on my ebike without a backpack. While on a regular bike carrying a backpack in warm weather will likely mean a sweaty back, this is much less of an issue on an ebike.

You probably have a backpack lying at home somewhere. Use it. If not, and you want something fancy that can double as a travel bag, I absolutely love Henty’s Travel Brief. It can fit 30 liters of stuff, which in my case often means fitting a 12-roll pack of toilet paper with some room to spare. It also really is quite a nice travel bag.

A reusable bag or two

I always keep a reusable shopping bag or two with my bike that I can hang around the handlebars. The goal isn’t to use them regularly — the aforementioned accessories are all for that — but rather to serve as an overflow buffer. There’s nothing worse than going to get a bunch of groceries, realizing that you wanted to get more stuff, and then not being able to fit that last box of croissants on your setup.

You could just take a bag with you before you go shopping, but I recommend keeping one with your ebike at all times. Having an extra shopping bag handy is useful for fitting those last items — preferably lighter ones so your weight distribution doesn’t get too wonky.

A cargo trailer

But what about those times you need to just carry a lot of stuff — or something really big? That’s when a cargo trailer can be a lifesaver.

Bike trailers come in all shapes and sizes, from flatbeds to big ol’ buckets, but my favorite as a city dweller is the Burley Travoy. I reviewed it a while back, but suffice to say it’s a cross between a cargo trailer, a handtruck, and a granny cart. It folds up compact when not needed, but I’ve also used it to carry anything from groceries…

…to office chairs…

I picked up an office chair from Staples with the older version of the Travoy. Note that I had to use separate cam straps, as the included tie-down straps were not long enough for this particular package.

…to a pair of large dog crates.

The Travoy comes with tie-down straps, but for larger items, I replaced them with heavy-duty cam straps. It’s an incredibly versatile trailer that only requires a little creativity for carrying oddly shaped items. Plus, you can bring it into a store to use it as a shopping cart.

It’s also a great option if you have multiple bikes, as it attaches directly to your seat post and the process only takes a few seconds. Likewise, it’s a useful accessory if you want to keep your bike light or aesthetically minimalist; it does not require anything to be permanently affixed onto your bike.

I like this thing a lot. If I had to make just one purchase for cargo purposes, it would probably be the Travoy.

Just about the only thing it’s not so great at is carrying items that need to stay flat (like a bunch of pizzas) and people (well, you probably could if you really wanted to, but tying someone down onto a bike trailer might look a little suspicious).

A child/pet trailer

Want to take your child/dog/cat/iguana with you? There are trailers for that too.

I have a dog and two cats, for which I requested a review unit of the Burley Tail Wagon — it’s a sturdy trailer that folds compact and can double as a pet stroller. The brand has built a reputation for the safety of its trailers, so it’s among the few I’d trust to haul my fluffy ones around.

Okay so, truth be told, I actually haven’t used the Tail Wagon for its primary purpose much, because my dog has extreme separation anxiety and we’re slowly working our way towards acclimating her to it.

But it’s been really great for carrying larger items, including for stuffing even more groceries and large boxes. Although it might be a little cumbersome to attach compared to panniers or the Travoy, it can fit a lot more stuff without needing tie-downs, and feels super stable over long rides.

So much more

This list only scratches the surface of how to carry stuff on your bike. There are small saddle bags, top tube bags, large bikepacking seat bags, frame bags, child seats, sidecars, and even surfboard racks. Not to mention that there are cargo bikes specifically designed to haul stuff, like the compact Tern GSD or the Long-John Urban Arrow Family.

You might have to get creative to carry some larger items, but after spending the past couple of years carrying all sorts of stuff on ebikes, it’s clear that where there’s a will (and a few useful accessories), there’s a way.

This post includes affiliate links to products that you can buy online. If you purchase them through our links, we get a small cut of the revenue.


Amazon’s new algorithm will spread workers’ duties across their muscle-tendon groups

Amazon is often accused of treating staff like expendable robots. And like machines, sometimes employees break down.

A 2019 survey of 145 warehouse workers found that 65% of them experienced physical pain while doing their jobs. Almost half (42%) said the pain persists even when they’re not working.

That’s a real shame — for Amazon. Those pesky injuries can slow down the company’s relentless pace of work.

But the e-commerce giant may have found a solution. Is it a more humane workload? Upgraded workers’ rights? Of course not. It’s an algorithm that rotates staff among tasks that use different body parts.

[Read: The biggest tech trends of 2021, according to 3 founders]

Jeff Bezos unveiled the system in his final letter as CEO to Amazon shareholders:

We’re developing new automated staffing schedules that use sophisticated algorithms to rotate employees among jobs that use different muscle-tendon groups to decrease repetitive motion and help protect employees from MSD [musculoskeletal disorder] risks. This new technology is central to a job rotation program that we’re rolling out throughout 2021.
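Amazon hasn’t published how the scheduling works under the hood, but the core idea in Bezos’s description (avoid assigning back-to-back tasks that stress the same muscle-tendon group) can be sketched in a few lines of Python. The task names and muscle groupings below are entirely hypothetical, for illustration only:

```python
# Illustrative sketch of muscle-group-aware job rotation.
# Task names and groupings are hypothetical, not Amazon's actual data.

TASK_GROUPS = {
    "lift_totes": "shoulders",
    "scan_items": "wrists",
    "pack_boxes": "wrists",
    "walk_picks": "legs",
    "sort_bins": "shoulders",
}

def next_task(previous_task, available_tasks):
    """Pick a task whose muscle-tendon group differs from the last one."""
    prev_group = TASK_GROUPS.get(previous_task)
    for task in available_tasks:
        if TASK_GROUPS[task] != prev_group:
            return task
    # No alternative group available; fall back to any task.
    return available_tasks[0]

def rotation_schedule(start_task, available_tasks, shifts):
    """Build a shift-by-shift rotation that avoids repeating a muscle group."""
    schedule = [start_task]
    for _ in range(shifts - 1):
        schedule.append(next_task(schedule[-1], available_tasks))
    return schedule
```

A real system would presumably also weigh staffing levels, training, and cumulative load, but even this greedy version guarantees no two consecutive assignments hit the same muscle-tendon group when an alternative exists.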

The world’s richest man added that Amazon has already reduced these injuries.

He said that musculoskeletal disorders at the company dropped by 32% from 2019 to 2020. More importantly, MSDs resulting in time off work decreased by more than half.

Those claims probably shouldn’t be taken at face value, however. Last year, an investigation found that Amazon had misled the public about workplace injury rates.

Bezos did acknowledge that the company still needs to do a better job for employees. But he disputed claims that employees are “treated as robots.”

Employees are able to take informal breaks throughout their shifts to stretch, get water, use the restroom, or talk to a manager, all without impacting their performance.

He added that Amazon is going to be “Earth’s Best Employer” and “Earth’s Safest Place to Work.” Reports of staff peeing in bottles, shocking injury rates, union-busting, invasive surveillance, and impossible performance targets suggest he’s got a long way to go.

Greetings Humanoids! Did you know we have a newsletter all about AI? You can subscribe to it right here.

A love letter to Eve Online’s tutorial

“Don’t be afraid to die!” advises Eve Online player “Grey Gal.”

Eve Online is an immense game with nearly two decades of player-driven history and lore behind it. To the uninitiated, beginning your life in New Eden is daunting to say the least.

For those unfamiliar, Eve Online is a free-to-play, massively-multiplayer, online game where thousands of players vie for resources, power, and combat victories in a shared universe set in space.

The gist of the game, as far as I’ve managed to experience it, is that you’ve suddenly awakened in a spaceship and you’ve got work to do. And when I say “in” a spaceship, I mean it: you’re apparently the soul of the craft. And if you die, you have to be reincarnated in another spaceship?

To be clear: I’ve only played the tutorial, so the story is a bit fuzzy still. But, the point is that I really want to learn more about the game world. Right after I go murder some pirates, learn about the economy, and upgrade my ship. And fight some players. And join a PVP group. The list goes on and on.

The next person to play the tutorial after me might decide to be an engineer or a merchant. I’m a merciless pirate hunter hellbent on ending slavery in the galaxy. At least, that’s who I think my character is after an hour-long tour-de-force in the form of (dare I say it? Yes, I dare) the perfect tutorial.

I like to fancy myself a bit of an expert on game tutorials, simply because I play about a dozen new games a week and I always complete the tutorial. And, if I’m being honest, most of them suck.

Here’s what EO gets right:


In 2021, chances are most ‘new players’ aren’t new to gaming, just to the game they’re playing for the first time. Eve Online respects the fact that I’m not logging on to play a button masher, but a complex space sim. It doesn’t bog down the first 15 minutes with pages upon pages of command descriptions.

Far too often, the developers behind complex games (I won’t name and shame, but you know who you are) will try to front load the tutorial with everything a player needs to know. It’s like they’re angrily shoving information at me, as if to declare “I gave you everything you needed to know how to play. What more could you want from me?”

But Eve Online avoids this trap. I’ll be honest, I don’t really know what everything on the screen does. There’s like fifty little buttons. But they all have tooltips that pop up when I hover over them, so I’m confident I’ll figure things out.

Rather than teach me everything, Eve’s tutorial taught me a handful of things to do. It’s like I showed up for my first day at work and went through orientation. Nobody expects me to remember everything, but I’m eager to see what happens next.

Eve Online also respects my time. At no point in my life have I ever been eager or excited for a tutorial. I’m looking forward to experiencing the game and its tutorial is my gateway. It’s like standing in line for a ride. Sure, the ambiance is nice, but I want to get to the real game.

To that end, the tutorial could probably be completed in 20 minutes. I took longer to explore the interface and soak up the ambiance.


At the end of a well-executed tutorial I should be prepared for what’s immediately in front of me as a player and inspired to explore beyond what I’ve learned so far.

In Eve Online, this means I’m fairly certain I know what I want to do next and the game’s given me a clear path to doing it.

The tutorial lets players explore the facets of the game they’re most interested in and then sets them on that path. While it does play on some old tropes (you’ve just arrived, and you have to prove yourself to the big factions by taking on small jobs), the combination of an AI assistant, a big bright interface that basically says “click this button to advance,” and a surprisingly funny and to-the-point storyline makes this tutorial feel like it takes 10 minutes.

When I was finished I had to force myself to stop playing.


A lot of MMOs and competitive games suffer from a “hard to learn, hard to master” problem that makes it difficult for some people to get into them. I played Eve Online more than a decade ago, but I was never able to really get into it.

Over the years I’ve seen the news pieces covering the massive player-led wars and incredible record-breaking gaming feats. I wanted to dive in and try again, but I always had reasons not to.

I’ll never catch up, I’ll never have enough time to grind the levels or resources to be competitive, I just don’t have the time.

The current tutorial dismisses these worries by focusing on the one thing that matters: you.

Eve Online is a massive game, but its tutorial focuses on the little things. You’re just a ship with somewhere to go. When you’re done learning the ropes, your job is to go make some money, explore the galaxy, or find out where you fit in among fellow combat-seekers.

I never felt like I needed to rush into the fray and, perhaps most importantly, Eve Online doesn’t expect me to be a hero and it isn’t lying to me about how the galaxy needs me to save it (just like everyone else playing the game).

By the time I was done, the tutorial had shown me that my journey in Eve Online wasn’t something I could compare to anyone else’s.

I was also lucky enough to chat with a player known as Grey Gal, who specializes in onboarding noobs like me. We spoke on the phone after I joined developer CCP’s Associate PR Marketing Specialist Páll Grétar and Community Developer Jessica Kenyon for a one-hour game demo.

Grey Gal told me about her experiences over 12 years as a player. The most interesting takeaway I had was when she laughed about the idea of new players not being able to catch up.

Per Grey Gal:

New players have just as much chance of making an impact in a battle as experienced ones. It’s more about how you fit your ship and how much skill you, as the player, have than how much you’ve grinded or how long you’ve been playing.

And that’s the sense I had after completing the simple yet compelling tutorial for Eve Online: I felt like I was ready to contribute to this gorgeous galaxy in front of me. I wanted to find somewhere to report for duty and get started.

As Community Developer Jessica Kenyon told me:

The players that survive to go on to do great things in Eve Online are those who are predisposed to setting their own goals.

Before I began the tutorial, I was expecting the standard disjointed combination of roleplay lore and out-of-character button diagrams I’ve become used to when starting a new MMO, but Eve Online gave me a masterpiece. Its tutorial has everything I need and nothing I don’t.

I hope other developers, across all gaming genres, are paying attention.

The Flair 58 espresso maker is a coffee-lover’s dream machine

The best money I ever spent on coffee was buying myself the original Flair Espresso Maker.

The Flair has a cult following among espresso aficionados for making it easy to make espresso at coffee-shop quality in a compact, affordable design that requires no electricity. The roughly 6-9 bars of pressure necessary for a proper espresso are created with your own arm strength, rather than any fancy machinery.

But while the original Flair (and the Flair Pro follow-up) were capable of delivering incredible shots, there was a bit of a learning curve, and the process of actually pulling a shot was a little finicky, involving the assembly of a multitude of components with each shot.

This was particularly an issue with light roasts, which require high brewing temperatures. Likewise, delivering multiple shots in a row could be a time-consuming process.

Enter the Flair 58, currently on pre-order. It is so named because it uses the same 58 mm portafilters (the bit that holds the coffee) you find in commercial coffee machines. By introducing a larger basket, making a few design tweaks, and allowing just one electrical component — a temperature controller — the Flair 58 fixes every complaint I had about the earlier models.

At $529, it is the most expensive Flair yet, but for the quality of shots and flexibility you get, it is an absolute steal. I simply do not know how you can get a coffee maker this good, this consistent, and with so much flexibility for less money. The Flair 58 isn’t competing with entry-level coffee machines — it aims straight for the multi-thousand-dollar espresso makers of high-end coffee shops.

Here’s how making coffee works on the new model:

  • Turn on the Flair 58’s temperature controller to preheat the brew chamber for the right roast setting: dark, medium, or light. This takes about 30 to 90 seconds.
  • Boil water in a kettle while you grind your coffee. It’s essential to have a good burr grinder and freshly roasted beans.
  • Place your grounds into the portafilter, tamp (a nice, hefty tamper is included), and lock the portafilter onto the Flair 58.
  • Pour the water into the brew chamber.
  • Pull down the lever for your shot, maintaining adequate pressure (as visible on the pressure gauge) for roughly 30-50 seconds.
  • Empty the coffee grounds and repeat (except now your water is already boiled).

It takes hardly any more time than making coffee on a traditional machine. The only real limitation is how quickly you can grind beans, which is no different from a commercial or high-end machine setup.

If you’re not familiar with how the earlier Flair models worked, you should read my reviews of the original and Pro. But suffice to say, the Flair 58 significantly cuts down on the time to make a shot, especially if you’re making multiple shots; I always make a shot for my girlfriend before my own.

It is also much more comfortable when preparing a second shot, as all the hot components are isolated (taking apart the brew chamber on the old Flairs always felt a bit too close to a burn hazard). And the longer lever arm makes reaching 9 bars of pressure a cinch.

As for the quality of the shots, they’re sublime. Admittedly, I haven’t tried very many home espresso machines, but that’s only because I was spoiled early on by the Flair. The Flair’s lever allows you to intuitively do something called pressure profiling (varying the pressure over the course of a shot), a capability you’d normally have to spend thousands of dollars to get.

The pressure gauge and lever make it easy to control your coffee’s extraction.

Despite making coffee at home almost daily, I still visit my local and new coffee shops fairly often. But I don’t feel any of them can give me a better shot than I can make for myself with the Flair 58.

Truth be told, even the original Flair gave coffee shops a run for their money, but the lack of temperature control made it harder to be consistent with light roasts. With the Flair 58, the only limitation is your own skill.

This actually wasn’t that great of a shot compared to what the Flair 58 can produce because I ran out of fresh beans, but I had to get at least one photo of a coffee extraction in here.

I’m aware this is more of a rave than a review, so I can nitpick about some things:

  • The use of electricity and the power cord sticking out of the brew chamber makes the 58 just a little less elegant than its predecessors.
  • It’s larger than its predecessors, so it’ll take more counter space. You also need a fair bit of vertical clearance for the lever arm, and it’s not quite as portable either.
  • It might take you a couple of seconds longer to make coffee than on a regular machine.
  • It requires a bit of a finer grind than the earlier Flair models.
  • It is perhaps a little less forgiving than the earlier models, so you’ll have to refine your tamp and dial in your grind a hair more, although the variable pressure still makes it much more forgiving than a standard machine.
  • It’s hard to get a yield of over 60 grams.
  • There’s still no way to steam milk, something you’ll find on most espresso machines.

For that last one, I’ve found the $39 Subminimal NanoFoamer to be a fantastic solution. Heat up milk (they also sell a stovetop Milk Jug), and the NanoFoamer handles the texture, allowing me to make latte art at least as good as I can with the steamer on my Breville Bambino (which, given my lack of artistic skill, is not very good).

Not only is the Flair 58 the company’s most advanced model and its best shot-maker; it is also the easiest and fastest way to pull barista-quality shots.

I wouldn’t be surprised if actual coffee shops started to adopt a fleet of Flair 58s instead of spending money on a fancy La Marzocco or La Pavoni. The Flair 58 will likely require a lot less maintenance too, thanks to its deceptively simple design.

Newcomers to home espresso who don’t yet own a good grinder may be better served by the $119 Flair Neo, and the Classic and Pro models still make fantastic shots for less money.

But if you want the best of the best, the Flair 58 offers improved usability and can make some of the best coffee you’ll ever have at a price that’s still basically a steal in the world of espresso.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

How a theoretical mouse could crack the stock market

A team of physicists at Emory University recently published research indicating they’d successfully managed to reduce a mouse’s brain activity to a simple predictive model. This could be a breakthrough for artificial neural networks. You know: robot brains.

Let there be mice: Scientists can do miraculous things with mice, such as growing a human ear on one’s back or controlling one via computer mouse. But this is the first time we’ve heard of researchers using machine learning techniques to build a theoretical mouse brain.

Per a press release from Emory University:

The dynamics of the neural activity of a mouse brain behave in a peculiar, unexpected way that can be theoretically modeled without any fine tuning.

In other words: We can observe a mouse’s brain activity in real-time, but there are simply too many neuronal interactions for us to measure and quantify each and every one – even with AI. So the scientists are using the equivalent of a math trick to make things simpler.

How’s it work? The research is based on a theory of criticality in neural networks. Basically, all the neurons in your brain exist in an equilibrium between chaos and order. They don’t all do the same thing, but they also aren’t bouncing around randomly.

The researchers believe the brain operates in this balance in much the same way as other state-transitioning systems. Water, for example, can change from gas to liquid to solid. And at some point during each transition, it reaches a critical point where its molecules are effectively in both states at once, and in neither.
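To make the “balance between chaos and order” idea concrete, here’s a toy branching-process model of neural firing — my own illustration, not the Emory team’s model. Each active neuron triggers on average m downstream neurons at the next time step: below m = 1 activity fizzles out, above it activity explodes, and right at m = 1 (the critical point) cascades of all sizes appear.

```python
import numpy as np

def avalanche_size(m, rng, max_steps=1000):
    """Total activity triggered by one initial spike, where each active
    neuron activates on average m others at the next time step."""
    active, total = 1, 1
    for _ in range(max_steps):
        # Sum of `active` independent Poisson(m) offspring counts
        active = rng.poisson(m * active)
        total += active
        if active == 0:
            break
    return total

rng = np.random.default_rng(42)
trials = 2000
sub = np.mean([avalanche_size(0.5, rng) for _ in range(trials)])   # order: dies out
crit = np.mean([avalanche_size(1.0, rng) for _ in range(trials)])  # edge of chaos

# Subcritical activity quickly dies out (mean total size is 1/(1-m) = 2);
# at the critical point m = 1, cascades of all sizes occur and the mean
# avalanche size keeps growing as the step cutoff is raised.
print(f"subcritical mean: {sub:.1f}, critical mean: {crit:.1f}")
```

The punchline is that a single knob sitting exactly at its critical value produces the scale-free bursts of activity that criticality theory predicts for real cortex.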

[Read: The biggest tech trends of 2021, according to 3 founders]

The researchers hypothesized that brains, which are organic neural networks, operate at the same kind of critical balance point. So they ran a battery of maze-navigation experiments on mice to build a dataset of brain activity.

Next, the team went to work developing a working, simplified model that could predict neuron interactions using the experimental data as a target. According to their research paper, their model is accurate to within a few percentage points.
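To give a rough feel for what “fit a simplified model to recorded activity, then check its predictions against held-out data” looks like, here’s a minimal sketch using synthetic data and a plain linear model. The synthetic system, the linear fit, and the error threshold are all my own stand-ins; the Emory team’s actual model and recordings are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 20, 500

# Synthetic stand-in for recorded brain activity: a stable, noisy
# linear system x[t+1] = W_true @ x[t] + noise.
W_true = rng.normal(size=(n_neurons, n_neurons))
W_true *= 0.7 / np.max(np.abs(np.linalg.eigvals(W_true)))  # enforce stability

X = np.zeros((n_steps, n_neurons))
X[0] = rng.normal(size=n_neurons)
for t in range(n_steps - 1):
    X[t + 1] = W_true @ X[t] + rng.normal(scale=0.05, size=n_neurons)

# Fit a simplified model x[t+1] ≈ x[t] @ W by least squares,
# using the first half of the "recording" as training data.
half = n_steps // 2
W_fit, *_ = np.linalg.lstsq(X[:half - 1], X[1:half], rcond=None)

# Evaluate on the held-out second half: mean-squared prediction error
# relative to the overall variance of the activity.
pred = X[half:-1] @ W_fit
err = np.mean((pred - X[half + 1:]) ** 2) / np.var(X[half + 1:])
print(f"relative prediction error: {err:.2f}")
```

Even this bare-bones version captures the shape of the exercise: a compact model, trained on part of the data, that predicts the rest of the activity substantially better than chance.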

What’s it mean? This is early work, but there’s a reason scientists use mouse brains for this kind of research: they’re not so different from ours. If you can reduce what goes on in a mouse’s head to a working AI model, it’s likely you can eventually scale that to human-brain levels.

On the conservative side of things, this could lead to much more robust deep learning solutions. Our current neural networks are a pale attempt to imitate what nature does with ease. But the Emory team’s mouse models could represent a turning point in robustness, especially in areas where a model is likely to be affected by outside factors.

This could, potentially, include stronger AI inferences where diversity is concerned and increased resilience against bias. And other predictive systems could benefit as well, such as stock market prediction algorithms and financial tracking models. It’s possible this could even increase our ability to predict weather patterns over long periods of time.

Quick take: This is brilliant, but its actual usefulness remains to be seen. Ironically, the tech and AI industries are also at a weird, unpredictable point of criticality, where brute-force hardware solutions and elegant software shortcuts are starting to pull away from each other.

Still, if we take a highly optimistic view, this could also be the start of something amazing, such as artificial general intelligence (AGI): machines that actually think. No matter how we arrive at AGI, we’ll likely need to begin with models that imitate nature’s organic neural nets as closely as possible. You’ve got to start somewhere.