
Hey millennials, stop ruining emoji for Gen Z

When I saw the news that Apple would be releasing 217 new emojis into the world, I did what I always do: I asked my undergraduates what it meant to them. “We barely use them anymore,” they scoffed. To them, many emojis are like overenthusiastic dance moves at weddings: reserved for awkward millennials. “And they use them all wrong anyway,” my generation Z cohort added earnestly.

My work focuses on how people use technology, and I’ve been following the rise of emoji for a decade. With 3,353 characters available and 5 billion sent each day, emojis are now a significant language system.

When the emoji database is updated, it usually reflects the needs of the time. This latest update, for instance, features a new vaccine syringe and more same-sex couples.

But if my undergraduates are anything to go by, emojis are also a generational battleground. Like skinny jeans and side partings, the “laughing crying emoji,” better known as 😂, fell into disrepute among the young in 2020 – just five years after being picked as the Oxford Dictionaries’ 2015 Word of the Year. For gen Z TikTok users, clueless millennials are responsible for rendering many emojis utterly unusable – to the point that some in gen Z barely use emojis at all.


Research can help explain these spats over emojis. Because their meaning is interpreted by users, not dictated from above, emojis have a rich history of creative use and coded messaging. Apple’s 217 new emojis will be subjected to the same process of creative interpretation: accepted, rejected, or repurposed by different generations based on pop culture currents and digital trends.

Two emojis of a syringe - one dripping with blood, one with clear liquid
Previously, the syringe emoji suggested blood extraction. The new, updated emoji looks more like a vaccine. Apple/Emojipedia

Face the facts

When emojis were first designed by Shigetaka Kurita in 1999, they were intended specifically for the Japanese market. But just over a decade later, the Unicode Consortium, sometimes described as “the UN for tech,” unveiled these icons to the whole world.

In 2011, Instagram tracked the uptake of emojis through user messages, watching how the emoji 🙂 eclipsed the punctuation-mark smiley :-) in just a few years. Old-style smileys now look as outdated as Shakespearean English on our LED screens: a sign of fogeyness in baby boomers (people born between 1946 and 1964) or an ironic throwback for the hipsters of gen Z.

The Unicode Consortium now meets each year to consider new types of emoji, including emojis that support inclusivity. In 2015, a new range of skin colors was added to existing emojis. In 2021, the Apple operating system update will include mixed-race and same-sex couples, as well as men and women with beards.
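Technically, these inclusive variants aren’t brand-new characters: Unicode builds them by combining existing code points. A minimal Python sketch of the two mechanisms (the code points below come from the published Unicode emoji data):

```python
# Skin tones: a Fitzpatrick modifier (U+1F3FB..U+1F3FF) is appended to an
# existing base emoji; renderers display the pair as one modified glyph.
wave = "\U0001F44B"    # waving hand
medium = "\U0001F3FD"  # medium skin tone modifier
print(wave + medium)   # renders as a single medium-skin-tone waving hand

# Couples and families: individual emojis are joined with the zero-width
# joiner (U+200D) into one sequence that renders as a single glyph.
couple = "\U0001F469\u200D\u2764\uFE0F\u200D\U0001F469"  # woman, heart, woman
print(len(couple))  # 6 code points, but one visible emoji
```

This combining approach is why new-looking emojis can arrive in an update without any new base characters being added.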

Bitter boomers?

Not everyone has been thrilled by the rise of emoji. In 2018, a Daily Mail headline lamented that “Emojis are ruining the English language,” citing research by Google in which 94% of those surveyed felt that English was deteriorating, in part because of emoji use.

But such criticisms, which are sometimes leveled by boomers, tend to misinterpret emojis, which are after all informal and conversational, not formal and oratorical. Studies have found no evidence that emojis have reduced overall literacy.

On the contrary, it appears that emojis actually enhance our communicative capabilities, including language acquisition. Studies have shown how emojis are an effective substitute for gestures in non-verbal communication, bringing a new dimension to text.

A 2013 study, meanwhile, suggested that emojis connect to the area of the brain associated with recognizing facial expressions, making a 😀 as nourishing as a human smile. Given these findings, it’s likely that those who reject emojis actually impoverish their language capabilities.

Creative criticism

The conflict between gen Z and millennials, meanwhile, emerges from confused meanings. Although the Unicode Consortium has a definition for each icon, including the 217 Apple is due to release, out in the wild they often take on new meanings. Many emojis carry more than one meaning: a literal one and a suggested one, for instance. Subversive, rebellious meanings are often created by the young: today’s gen Z.

The aubergine 🍆 is a classic example of how an innocent vegetable has had its meaning creatively repurposed by young people. The brain 🧠 is an emerging example of the innocent-turned-dirty emoji canon, which already boasts a large corpus.

Three emojis, one blowing out air, one with spiral eyes, one in clouds
These three emojis will also hit iPhones with Apple’s latest update. Their meaning is yet to be decided. Emojipedia/Apple

And it doesn’t stop there. With gen Z now at the helm of digital culture, the emoji encyclopedia is developing new ironic and sarcastic double meanings. It’s no wonder that millennials can’t keep up, and keep provoking outrage from younger people who consider themselves to be highly emoji-literate.

Emojis remain powerful means of emotional and creative expression, even if some in gen Z claim they’ve been made redundant by misuse. This new batch of 217 emojis will be adopted across generations and communities, with each staking their claim to different meanings and combinations. The stage is set for a new round of intergenerational mockery.

This article by Mark Brill, Senior Lecturer, School of Games, Film and Animation, Birmingham City University is republished from The Conversation under a Creative Commons license. Read the original article.


Can auditing eliminate bias from algorithms?

For more than a decade, journalists and researchers have been writing about the dangers of relying on algorithms to make weighty decisions: who gets locked up, who gets a job, who gets a loan — even who has priority for COVID-19 vaccines.

Rather than remove bias, one algorithm after another has codified and perpetuated it, as companies have simultaneously continued to more or less shield their algorithms from public scrutiny.

The big question ever since: How do we solve this problem? Lawmakers and researchers have advocated for algorithmic audits, which would dissect and stress-test algorithms to see how they work and whether they’re performing their stated goals or producing biased outcomes. And there is a growing field of private auditing firms that purport to do just that. Increasingly, companies are turning to these firms to review their algorithms, particularly when they’ve faced criticism for biased outcomes, but it’s not clear whether such audits are actually making algorithms less biased — or if they’re simply good PR.

Algorithmic auditing got a lot of press recently when HireVue, a popular hiring software company used by companies like Walmart and Goldman Sachs, faced criticism that the algorithms it used to assess candidates through video interviews were biased.

HireVue called in an auditing firm to help and in January touted the results of the audit in a press release.

The audit found the software’s predictions ‘work as advertised with regard to fairness and bias issues,’ HireVue said in a press release, quoting the auditing firm it hired, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA).

But despite making changes to its process, including eliminating video from its interviews, HireVue was widely accused of using the audit — which looked narrowly at a hiring test for early career candidates, not HireVue’s candidate evaluation process as a whole — as a PR stunt.

Articles in Fast Company, VentureBeat, and MIT Technology Review called out the company for mischaracterizing the audit.

HireVue said it was transparent with the audit by making the report publicly available and added that the press release specified that the audit was only for a specific scenario.

“While HireVue was open to any type of audit, including one that involved looking at our process in general, ORCAA asked to focus on a single use case to enable concrete discussions about the system,” Lindsey Zuloaga, HireVue’s chief data scientist, said in an email. “We worked with ORCAA to choose a representative use case with substantial overlap with the assessments most HireVue candidates go through.”


But algorithmic auditors were also displeased about HireVue’s public statements on the audit.

“In repurposing [ORCAA’s] very thoughtful analysis into marketing collateral, they’re undermining the legitimacy of the whole field,” Liz O’Sullivan, co-founder of Arthur, an AI explainability and bias monitoring startup, said.

And that is the problem with algorithmic auditing as a tool for eliminating bias: Companies might use them to make real improvements, but they might not. And there are no industry standards or regulations that hold the auditors or the companies that use them to account.

What is algorithmic auditing — how does it work?

Good question — it’s a pretty undefined field. Generally, audits proceed a few different ways: by looking at an algorithm’s code and the data from its results, or by viewing an algorithm’s potential effects through interviews and workshops with employees.

Audits with access to an algorithm’s code allow reviewers to assess whether the algorithm’s training data is biased and create hypothetical scenarios to test effects on different populations.
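As a rough illustration of what such a check can look like, here is a minimal, hypothetical Python sketch of one common fairness test: comparing selection rates between groups and computing the “four-fifths” disparate-impact ratio used in US employment law. The data and threshold are illustrative, not any auditing firm’s actual method.

```python
# Sketch of a selection-rate audit. Input is a list of (group, selected)
# pairs drawn from an algorithm's decisions; output is each group's
# selection rate relative to a reference group.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs, was_selected in {0, 1}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += ok
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    # Ratios below 0.8 are conventionally flagged as adverse impact.
    return {g: rate / ref for g, rate in rates.items()}

# Illustrative data: group A selected 60% of the time, group B 30%.
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70
print(disparate_impact(decisions, "A"))  # {'A': 1.0, 'B': 0.5}
```

A real audit layers many such tests (and qualitative review) on top of this kind of arithmetic, but the ratio above is the kind of number an auditor’s report turns on.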

There are only about 10 to 20 reputable firms offering algorithmic reviews, Rumman Chowdhury, Twitter’s director of machine learning ethics and founder of the algorithmic auditing company Parity, said. Companies may also have their own internal auditing teams that look at algorithms before they’re released to the public.

In 2016, an Obama administration report on algorithmic systems and civil rights encouraged the development of an algorithmic auditing industry. Hiring an auditor still isn’t common practice, though, since companies have no obligation to do so, and according to multiple auditors, companies don’t want the scrutiny or potential legal issues that that scrutiny may raise, especially for products they market.

“Lawyers tell me, ‘If we hire you and find out there’s a problem that we can’t fix, then we have lost plausible deniability and we don’t want to be the next cigarette company,’ ” ORCAA’s founder, Cathy O’Neil, said. “That’s the most common reason I don’t get a job.”

For those that do hire auditors, there are no standards for what an “audit” should entail. Even a proposed New York City law that requires annual audits of hiring algorithms doesn’t spell out how the audits should be conducted. A seal of approval from one auditor could mean much more scrutiny than that from another.

And because audit reports are also almost always bound by nondisclosure agreements, the companies can’t compare each other’s work.

“The big problem is, we’re going to find as this field gets more lucrative, we really need standards for what an audit is,” said Chowdhury. “There are plenty of people out there who are willing to call something an audit, make a nice looking website and call it a day, and rake in cash with no standards.”

And tech companies aren’t always forthcoming, even with the auditors they hire, some auditors say.

“We get this situation where trade secrets are a good enough reason to allow these algorithms to operate obscurely and in the dark, and we can’t have that,” Arthur’s O’Sullivan said.

Auditors have been in scenarios where they don’t have access to the software’s code and so risk violating computer access laws, Inioluwa Deborah Raji, an auditor and a research collaborator at the Algorithmic Justice League, said. Chowdhury said she has declined audits when companies demanded the right to review the results before public release.

For HireVue’s audit, ORCAA interviewed stakeholders including HireVue employees, customers, job candidates, and algorithmic fairness experts, and identified concerns that the company needed to address, Zuloaga said.

ORCAA’s evaluation didn’t look at the technical details of HireVue’s algorithms — like what data the algorithm was trained on, or its code — though Zuloaga said the company did not limit auditors’ access in any way.

“ORCAA asked for details on these analyses but their approach was focused on addressing how stakeholders are affected by the algorithm,” Zuloaga said.

O’Neil said she could not comment on the HireVue audit.

Many audits are done before products are released, but that’s not to say they won’t run into problems, because algorithms don’t exist in a vacuum. Take, for example, when Microsoft built a chatbot that quickly turned racist once it was exposed to Twitter users. 

“Once you’ve put it into the real world, a million things can go wrong, even with the best intentions,” O’Sullivan said. “The framework we would love to get adopted is there’s no such thing as good enough. There are always ways to make things fairer.”

So some prerelease audits will also provide continuous monitoring, though it’s not common. The practice is gaining momentum among banks and health care companies, O’Sullivan said.

O’Sullivan’s monitoring company installs a dashboard that looks for anomalies in algorithms as they are being used in real-time. For instance, it would alert companies months after launch if their algorithms were rejecting more women applicants for loans.
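A hypothetical sketch of this kind of check, with invented data, group names, and thresholds (a real monitoring product is, of course, far more sophisticated):

```python
# Sketch of continuous monitoring: compare a live window of loan decisions
# against a baseline window and alert when any group's approval rate has
# drifted downward beyond a tolerance. All numbers are illustrative.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def drift_alerts(baseline, live, tolerance=0.05):
    """baseline/live: dicts mapping group -> list of 0/1 approval outcomes."""
    alerts = []
    for group in baseline:
        delta = approval_rate(live[group]) - approval_rate(baseline[group])
        if delta < -tolerance:  # group is now approved noticeably less often
            alerts.append((group, round(delta, 3)))
    return alerts

baseline = {"women": [1] * 55 + [0] * 45, "men": [1] * 60 + [0] * 40}
live     = {"women": [1] * 40 + [0] * 60, "men": [1] * 60 + [0] * 40}
print(drift_alerts(baseline, live))  # [('women', -0.15)]
```

The point of running this continuously, rather than once before launch, is that the drift only appears after the algorithm meets real-world data.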

And finally, there’s also a growing body of adversarial audits, largely conducted by researchers and some journalists, which scrutinize algorithms without a company’s consent. Take, for example, Raji and Joy Buolamwini, founder of the Algorithmic Justice League, whose work on Amazon’s Rekognition tool highlighted how the software had racial and gender bias, without the company’s involvement.

Do companies fix their algorithms after an audit?

There is no guarantee that companies will address the issues raised in an audit.

“You can have a quality audit and still not get accountability from the company,” said Raji. “It requires a lot of energy to bridge the gap between getting the audit results and then translating that into accountability.”

Public pressure can at times push companies to address algorithmic bias in their technology — as can audits that weren’t performed at the behest of the tech firm and so aren’t covered by a nondisclosure agreement.

Raji said the Gender Shades study, which found gender and racial bias in commercial facial recognition tools, named companies like IBM and Microsoft to spark a public conversation around it.

But it can be hard to create buzz around algorithmic accountability, she said.

While bias in facial recognition is relatable — people can see photos and the error rates and understand the consequences of racial and gender bias in the technology — it may be harder to relate to something like bias in interest-rate algorithms.

“It’s a bit sad that we rely so much on public outcry,” Raji said. “If the public doesn’t understand it, there is no fine, there are no legal repercussions. And it makes it very frustrating.”

So what can be done to improve algorithmic auditing? 

In 2019, a group of Democratic lawmakers introduced the federal Algorithmic Accountability Act, which would have required companies to audit their algorithms and address any bias issues the audits revealed before they’re put into use.

AI For the People’s founder Mutale Nkonde was part of a team of technologists that helped draft the bill and said it would have created government mandates for companies to both conduct audits and follow through on their findings.

“Much like drug testing, there would have to be some type of agency like the Food and Drug Administration that looked at algorithms,” she said. “If we saw the disparate impact, then that algorithm wouldn’t be released to the market.”

The bill never made it to a vote.

Sen. Ron Wyden, a Democrat from Oregon, said he plans to reintroduce the bill with Sen. Cory Booker (D-NJ) and Rep. Yvette Clarke (D-NY), with updates to the 2019 version. It’s unclear if the bill would set standards for audits, but it would require that companies act on their results.

“I agree that researchers, industry, and the government need to work toward establishing recognized benchmarks for auditing AI, to ensure audits are as impactful as possible,” Wyden said in a statement. “However, the stakes are too high to wait for full academic consensus before Congress begins to take action to protect against bias tainting automated systems. It’s my view we need to work on both tracks.”

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

Published February 27, 2021 — 14:00 UTC


The UI and UX work together in this nine-course training package to make you a web design pro


Credit: Faizur Rehman/Unsplash

TLDR: From creating brilliant user interfaces to understanding user experience needs, The Complete Become a UI/UX Designer Bundle is how you build a website or app the right way.

Little is more critical to web development than the experience a user ultimately has engaging with an app or site. Given that importance, it’s always a little strange how often developers fall into the trap of forgetting that humans will one day have to navigate the system they’re intricately building.

Use all the cutting-edge design elements and exotic architecture you like, but if they discourage users from engaging with your project, it’s on a fast track to failure.

Creators can make sure user interface and user experience concerns stay in total alignment with the training in The Complete Become a UI/UX Designer Bundle ($34.99, over 90 percent off, from TNW Deals).

The package is a deep dive into the ways that form and function can meld together seamlessly. Over nine courses, instructor and digital entrepreneur Juan Galvan covers everything from web design to sales funnels and business development to help both sides of the creative brain work hand in hand to produce products people want.

The package begins with a pair of courses introducing students to the hard tech and soft artsy sides of web creation. Introduction to the Web Industry explores the phases of web development, team roles and responsibilities, and successful project management frameworks. Meanwhile, The Foundations of Graphic Design delves into the art of web development, including practical hands-on work with the psychology of color, fonts and icons, basic graphic design software tools, and more.

Next, a handful of courses turn to the dynamics of UI and UX, examining the customer journey and the behavioral psychology and influence that drive UX creation, as well as the user interface pieces proven to work for designing everything from landing pages and websites to mobile apps. There’s even a course here on how to launch a career as a UI and UX expert.

The training is rounded out with a collection of courses on the hardcore business realities of design work, from creating highly profitable sales funnels to business development tactics, right up to the steps for building your own profitable web design agency.

All the training in The Complete Become a UI/UX Designer Bundle is a $1,800 value, but right now, it’s all available and hundreds off the regular price, down to only $34.99.

Prices are subject to change.



This tiny particle accelerator fits in a large room — making it much more practical than CERN’s

In 2010, when scientists were preparing to smash the first particles together within the Large Hadron Collider (LHC), sections of the media fantasized that the EU-wide experiment might create a black hole that could swallow and destroy our planet. How on Earth, columnists fumed, could scientists justify such a dangerous indulgence in the pursuit of abstract, theoretical knowledge?

But particle accelerators are much more than enormous toys for scientists to play with. They have practical uses too, though their sheer size has, so far, prevented their widespread use. Now, as part of a large-scale European collaboration, my team has published a report that explains in detail how a far smaller particle accelerator could be built – closer to the size of a large room, rather than a large city.

Inspired by the technological and scientific know-how of machines like the LHC, our particle accelerator is designed to be as small as possible so it can be put to immediate practical use in industry, in healthcare, and in universities.

Collider scope

The biggest collider in the world, the LHC, uses particle acceleration to achieve the astonishing speeds at which it collides particles. This system was used to detect the long-sought Higgs boson – one of the most elusive particles predicted by the Standard Model, our current model for describing the structure and operation of the universe.

Less giant and glamorous particle accelerators have been around since the early 1930s, performing useful jobs as well as causing collisions to help our understanding of fundamental science. Accelerated particles are used to generate radioactive materials and strong bursts of radiation, which are crucial for healthcare processes such as radiotherapy, nuclear medicine, and CT scans.

The typical downside to accelerators is that they tend to be bulky, complex to run, and often prohibitively expensive. The LHC represents a pinnacle of experimental physics, but it is 27 kilometers (17 miles) in circumference and cost 6.5 billion Swiss francs (£5.2 billion) to build and test. The accelerators currently installed in select hospitals are smaller and cheaper, but they still cost tens of millions of pounds and require 400x400m of space for installation. As such, only large regional hospitals can afford the money and the space to host a radiotherapy department.


Why exactly do accelerators need to be so big? The simple answer is that if they were any smaller, they’d break. Since they’re based on solid materials, ramping up the power too much would tear the system apart, creating a very expensive mess.

A large yellow circle drawn over an aerial view of fields
The Large Hadron Collider is a vast looped system on the France-Switzerland border. Cern

Need for speed

We set out to find a way to make smaller, cheaper particle accelerators for use in a wider range of hospitals – from the large and regional to the small and provincial.

Our team worked on the premise that to accelerate particles you actually have two options: either give them a strong boost over a short distance, or lots of small nudges over a long one – which is how the LHC works.

It’s a bit like reaching 100mph in a vehicle: you can either slowly accelerate in a truck over a long period of time, or you can put your foot down in a sports car and get there in a matter of seconds. Conventional accelerators are a bit like trucks: reliable and docile, but slow. We’ve been searching for a sports car alternative.

We found that alternative in plasma. The beauty of plasma is that it’s just composed of an ionized gas: a gas that’s been broken down to its tiniest components. As such, it doesn’t have the same limit on the power that can be applied to it as a solid system. In effect, you can’t break something that is already broken.

A man holds a clear component in front of his eye. Behind him is a large yellow pipe
A researcher holding a section of our novel particle accelerator. Behind is the corresponding section in a traditional accelerator. EuPRAXIA Conceptual Design Report, Author provided

It’s in this sense that plasmas can sustain much higher accelerating powers – up to a thousand times larger than a solid-state accelerator’s. The higher the power, the shorter the time and distance it takes to accelerate particles, and this leads to smaller, cheaper accelerators.
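To see why higher gradients shrink the machine, a back-of-envelope calculation helps. The gradient figures below are rough orders of magnitude commonly quoted for each technology, not design values from our report:

```python
# Length needed to reach a given energy gain scales inversely with the
# accelerating gradient. Gradient values are rough, illustrative orders
# of magnitude only.
def length_needed(energy_gain_gev, gradient_gv_per_m):
    return energy_gain_gev / gradient_gv_per_m  # meters

target = 1.0          # desired energy gain, in GeV
conventional = 0.05   # ~50 MV/m: roughly where solid RF cavities break down
plasma = 50.0         # ~50 GV/m: gradients demonstrated in plasma wakefields

print(length_needed(target, conventional))  # 20.0 (meters)
print(length_needed(target, plasma))        # 0.02 (meters)
```

A thousand-fold increase in gradient translates directly into a thousand-fold shorter accelerating structure, which is the core of the size argument.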

Our accelerator uses powerful lasers to “shake” the plasmas it contains, moving their particles about in a way that creates waves. It’s a little like the wake left behind by a boat (the laser) on a lake (the plasma). Like a surfer, a beam placed on one of these waves can then be pushed forward by it, constantly accelerating. These waves within plasmas are very small (sub-millimeter) and very powerful, which is what allows the overall accelerator to be extremely small.

Plasma perks

Plasma-based particle accelerators like ours will need 100 times less space than existing designs, reducing the space required for installation from 400x400m to just 40x40m. The hardware needed to build our accelerator is cheaper to install, run and maintain. Overall, we expect our plasma accelerator to reduce the cost of installing an accelerator in a hospital by a factor of ten.

Four different scanned images of a mouse
A mouse embryo scanned with our machine (left column) and traditional scans (right column).

Besides these two advantages, our accelerator can perform certain new functions that existing accelerators cannot. For instance, plasma-based accelerators can provide detailed X-rays of biological samples with far greater clarity than those that exist today. By providing a better image of the inside of a human body, this could help doctors find cancer at a much earlier stage, dramatically increasing the chance of successfully treating the illness.

The same ultra-high resolution imaging can also help spot the early signs of cracks and defects on machinery, at a nanometer scale. Faults related to such defects are regarded as one of the “six big losses” well known to manufacturers. Their early detection by our accelerator could help extend the lifetime of high-precision, high-quality components in heavy industry and manufacturing.

Accelerator rollout

The European Strategy Forum on Research Infrastructures is assessing the design report, with a decision expected in summer 2021. If successful, construction of the first two prototypes is expected to be completed by 2030, with access to external users to be granted immediately after.

Several years of interdisciplinary research were needed for us to form the first detailed and realistic design of a machine of this kind. Our plasma accelerator is the most recent example of how obscure, abstract, fundamental physics can enter into our everyday lives – cutting research costs, improving manufacturing, and helping to save lives.

This article by Gianluca Sarri, Reader (Associate Professor) at the School of Mathematics and Physics, Queen’s University Belfast is republished from The Conversation under a Creative Commons license. Read the original article.



The US space policy keeps changing — at the expense of the next Moon landing

Harrison Schmitt and Eugene Cernan blasted off from the Taurus-Littrow valley on the Moon in their lunar module Challenger on December 14, 1972. Five days later, they splashed down safely in the Pacific, closing the Apollo 17 mission and becoming the last humans to visit the lunar surface or venture anywhere beyond low-Earth orbit.

Now the international Artemis program, led by NASA, is aiming to put humans back on the Moon by 2024. But it is looking increasingly likely that this goal could be missed.

Image of President Nixon welcoming astronauts aboard the U S S Hornet.
President Nixon welcomes astronauts aboard the USS Hornet. wikipedia

History shows just how vulnerable space programs are, requiring as they do years of planning and development spanning several administrations. After Apollo 17, NASA had plans for several further lunar Apollo missions, even including a possible flyby of Venus. But budget cuts in the early 1970s and a reprioritizing of human spaceflight to focus on the Skylab project precluded any further lunar missions at that time.

It was not until July 20, 1989, the 20th anniversary of the Apollo 11 landing, that President George HW Bush inaugurated the Space Exploration Initiative. This involved the construction of a space station called Freedom, which would later become the International Space Station, aimed at returning humans to the Moon, and eventually undertaking crewed missions to Mars.

The project was to take place over an approximately 30-year time frame. The first human return flights to the Moon would take place in the late 1990s, followed by the establishment of a lunar base in the early 2010s. The estimated cost for the full program, including the Mars missions, was US$500 billion (£350 billion) spread over 20-30 years. This was a fraction of what would be spent on the Iraq War in 2003, but the project nevertheless ran into opposition in the Senate and was later canceled by the Clinton administration in 1996.

Another eight years would pass before, in 2004, President George W Bush, partly as a response to the Space Shuttle Columbia disaster, announced a revitalized Vision for Space Exploration. In response, NASA began the Constellation program, which would oversee the completion of what was now the International Space Station and then retire the Space Shuttle. It would also involve the development of two new crewed spacecraft: the Orion Crew Exploration Vehicle and the Altair Lunar Surface Access Module.


Orion, optimized for extended trips beyond low-Earth orbit, was to be developed by 2008, with the first crewed mission no later than 2014, and the first astronauts on the Moon by 2020. To lift the Orion and Altair spacecraft a new series of launchers would be developed under the name Ares, with Ares V having lift capability more akin to the massive Saturn V rockets of the Apollo era.

President Obama took office in 2009 and in 2010 instituted a review of US human spaceflight – the Augustine Commission. It found that the Constellation program was unsustainable with current NASA funding levels, was behind schedule, and that a human Mars mission was not possible with current technology. The prototype of the Ares I rocket was nonetheless launched on a successful test flight from the Kennedy Space Center on October 28, 2009.

The Constellation program was canceled by President Obama in 2010. This was the same year in which private company SpaceX made their first flight with the Falcon 9 rocket. Obama’s space plans were praised by some, including SpaceX’s founder Elon Musk, but criticized by others, including several Apollo astronauts.

The only significant survivor of Constellation was the Orion spacecraft, which was repurposed and renamed the Orion Multi-Purpose Crew Vehicle, or Orion MPCV. The Augustine Commission recommended a series of more modest space exploration goals for the US, which included Orion flights to near-Earth asteroids or to the moons of Mars, rather than the planet’s surface. Orion’s first, and so far only, test flight in space (without astronauts) took place on December 5, 2014.

The future of Artemis

In December 2017, President Donald Trump signed “Space Policy Directive 1,” which reoriented NASA to a lunar landing by 2024. NASA implemented the Artemis program in the same year, and it has been endorsed by the new Biden administration. This is the first time in decades that a new US administration has continued with the deep space human spaceflight policies of the previous one.

Artemis is also an international program, with the Lunar Gateway — an international orbital outpost at the Moon – being an essential part of the project. The international nature of Artemis might make the program more robust against policy changes, although the Lunar Gateway has already been delayed.

Officially, the first uncrewed test flight of Orion to lunar orbit, Artemis 1, is scheduled for later this year, with the 2024 return to the lunar surface still on the books. The effects of the pandemic and recent engineering concerns with the new and still unflown Space Launch System may push this back. Furthermore, in 2020 NASA requested US$3.2 billion (£2.3 billion) in development costs for the Human Lander System, a critical component of the first lunar landing mission, Artemis 3. Congress approved only a fraction of what was requested, putting the 2024 landing date in further jeopardy.

A delay of any more than a year would move Artemis 3 beyond the end of President Biden’s first term in office. This would make it vulnerable to the many vagaries of US deep space human spaceflight policy that we have seen for most of the spaceflight era.

By contrast, NASA’s Mars Exploration Program, which began in 1993 and whose goals are driven primarily by scientists rather than politicians, has resulted in a series of highly successful robotic orbiters and landers, most recently the spectacular landing of the Perseverance Rover at Jezero Crater. Undoubtedly, the robotic exploration of Mars carries less political weight than human missions and is considerably cheaper – with no inherent risks to astronauts.

If the current Artemis 3 schedule holds, then 52 years will have passed between Cernan and Schmitt departing the lunar surface in Challenger and the next human visitors to the Moon, in 2024.

This article by Gareth Dorrian, Post Doctoral Research Fellow in Space Science, University of Birmingham is republished from The Conversation under a Creative Commons license. Read the original article.


You SHOULD wave at the end of video calls — here’s why

It’s been brought to my attention that some people think you shouldn’t wave at the end of video calls. Those people are incorrect.

Zapier was remote before it was cool/mandatory. We’ve been running on video calls for almost a decade, and we’re all big end-of-meeting wavers. I did a quick Slack poll (which was unnecessary because I already know that every call here ends with a lot of waving), and the group unanimously favors waving. I literally can’t remember the last time we all agreed on anything — it’s possible it’s never happened before.

Do you wave at the end of Zoom calls? (You have the right to not respond; anything you say can be used against you on the Zapier blog.)
Not a single “no” vote! This never happens. (That number only got higher, by the way.)

Here’s the definitive ruling for the entire internet, from now until the end of time: waving at the end of video calls is good, and no one should feel bad for doing it. Ever.

You might be wondering: Justin, who put you in charge of Zoom etiquette? My answer: Google. Google put me in charge of Zoom etiquette.

The featured snippet for "when to mute on a video call" is Justin's article about it
Featured snippet levels of authority

So if you disagree, you are wrong. But let’s dig into why I’m so right.

Video calls are different from in-person meetings

Waving at the end of a video meeting might feel odd because no one waves at the end of in-person meetings. And sure: it would be weird to wave at a bunch of people you’re sharing a conference room table with.

But video calls aren’t like real-life meetings — at all. At the end of in-person meetings, you get to stand up at the same time as everyone else. You maybe walk away or stay for a bit and chat. What you don’t do is click a red button and suddenly disappear.

Video calls end suddenly. There’s nothing organic about it. Spending a few seconds waving and saying goodbye provides a sort-of-organic end and makes the whole thing feel more human. Internet linguist Gretchen McCulloch put it well:

One of the big differences with walking out of a meeting room vs a videocall is that you can still expect to see someone in the hallway after a physical meeting, which is not the case at all in videochat!

So we do need a fuller farewell, which a wave can accomplish!

I couldn’t agree with this more. Humans aren’t machines — we’re social animals. We want to feel connected to each other, even in a work context. Suddenly hanging up feels inhuman (because it is). Waving and saying goodbye solves this problem.

Waving is corny, and corny is good

The other day, I stumbled upon this charming op-ed, wherein author Alexander McCall Smith recalls being told by a well-meaning coworker that you’re not “supposed” to wave at the end of Zoom calls:

Apparently, you just don’t do that, and the same goes for the moment when you first see the other participants on a call. I have been waving to them with unconcealed delight at the sheer miracle of the technology. That, apparently, is not only a very uncool thing to do, it is just plain bad Zoom manners.

This made me sad. Here’s a person who’s happy to see other people during a very difficult year, and that happiness takes the form of a wave. And he’s told not to do this because it’s not cool — rude, even.

I mean, it’s true: waving at the end of a meeting isn’t cool. The cool thing to do is to just, like, mysteriously disappear, into the night, without ever acknowledging that you appreciate another human person.

Work is better if we all feel safe expressing appreciation for each other, and waving at the end of meetings is one small way to do that. It’s a little embarrassing, aggressively corny, and serves no purpose other than sincerely acknowledging the other people in the call. But that’s why it’s great.

I think our workplace is better because everyone waves. Yours would be too.

This article by Justin Pot was originally published on the Zapier blog and is republished here with permission. You can read the original article here.

Published February 26, 2021 — 08:28 UTC


AI-powered information operations and new citizenship

Digital information is power, and today citizens have this new power at their fingertips, channeled through reactions, comments, shares, saves and searches on our everyday digital platforms. However, this new power is ubiquitous and its direct effects remain obfuscated by the AI-powered black boxes of tech giants. 

Unfortunately, we are often too eager to tell Mark Zuckerberg, Sundar Pichai, or Jack Dorsey how to change their platform policies and algorithms to make the world a better place, while failing to answer this:

What are our responsibilities as citizens in this new reality in which digital and physical, political and commercial, and private and public are seamlessly interwoven?

Today, citizens need new skills for understanding the complex and multidimensional power of digital information and its relationship to democratic society.

AI-powered information ecosystem and new citizenship

Trying to stay up-to-date on what’s happening around us is a human condition. Today, everyone is trying to use that condition to catch your attention. And increasingly, AI-powered algorithmic systems decide what kind of information gets through to you.

The way information surfaces on your attention has changed. The way you can consume information has changed. The way you can evaluate information has changed. And the way you can react to information has changed. 

Controlling information has always been power, or at least connected to power. Today’s algorithmically amplified information operations put that power on steroids.

In today’s world, election campaigns spend unprecedented resources on digital platforms, trying to target the right people at the right time in the right place. An information operation on social media can make millions of people take to the streets under the same banner across the globe. Or a social video app of foreign origin can unpredictably affect how people participate in a local political event.

At the same time, powerful personalized computational propaganda can reach you day and night wherever you are. The people, organizations and machines behind malicious information operations are using, misusing and abusing the current mainstream platforms, such as Facebook, Twitter, Instagram and YouTube, to spread radicalizing material across the globe.

As a result, the way you can manifest your citizenship online and offline has changed. Through your algorithmic information flows and interfaces you have the power to influence, directly and indirectly, other people’s opinions and choices, at the polls and on the streets.

This fundamental change affects your capacity to use your digital tools and services in an ethical and sustainable way. 

New citizenship skills: data literacy, algorithm literacy and digital media literacy

No single platform or technology can alone solve the socio-technological challenges caused by information operations and computational propaganda. Developing methods against digital propaganda requires international multidisciplinary collaboration among tech companies, academia, societal powers, news media and educational institutions. 

But, to truly get to the bottom of the issue, we need to remember that regardless of the huge power of tech giants or new regulations affecting social media platforms, individuals do have a crucial role in making our digital platforms safer for everyone. 

Importantly, new citizenship skills are required for helping people to act more responsibly on digital platforms.

First, data literacy and algorithm literacy are needed for understanding the basic qualities and effects of data and algorithms that are constantly at work, influencing directly what you see, think and do online and beyond.  

Data literacy lets you assess and observe your data trails and their usage in different systems. Algorithmic literacy gives you a basic awareness of how different AI-powered systems personalize your experience, and how the power of algorithms is used to influence your interpretations, expectations and decision-making. These skills also make you more aware of your own (data) rights on digital platforms.

Could someone design and develop an engaging tool that would help you achieve data and algorithm literacy, while being as frictionless as today’s mainstream social apps?

Second, up-to-date digital media literacy allows you to make more sense of your feeds that are a continuously changing algorithmic bricolage of serious and entertaining, fact and fiction, news and marketing as well as disinformation and misinformation. 

Digital media literacy enables you to recognize benign and malicious information operations and tell the difference between deliberate spreading of disinformation and unconscious sharing of misinformation. In short, it empowers you to be more thoughtful in reacting to varying information operations that you experience online. 

Importantly, more data-aware, algorithm-informed and digital-media-literate users can demand more sustainable and ethical choices from their digital platforms. At the same time, we need a new practice of citizen experience design that brings citizen-centric thinking and values into the very core of AI design and development. 

It’s time to start talking more seriously and thoughtfully about the responsibilities of individuals, and the new citizenship skills that are required on today’s social media and tech platforms. In the long run, these emerging citizenship skills will be crucial for democratic societies across the globe.

Published February 25, 2021 — 22:00 UTC


How to shift your AI focus from accuracy to value

“All models are wrong, but some are useful.” This is a famous quote from the 20th-century statistician George E.P. Box.

This might seem like a strange message: shouldn’t all the models we build be as correct as possible? However, as a data scientist myself, I see great wisdom in this statement. After all, businesses don’t buy AI for model accuracy; they buy it to drive business value. Many businesses are investing in AI today without fully realizing its potential to deliver business impact. It’s time to shift the conversation.

The problem? Most groups embark on building their AI solution by discussing what they want to predict, and quickly shift to a discussion about model accuracy. This strategy often leads data scientists into the doldrums of model metrics that have no connection to business KPIs. Instead, we must focus on desired business outcomes and the actions AI can prescribe to achieve those goals.

Let’s illustrate this with an example of a software company. The accounts receivable team of this company might use AI to predict whether an invoice will be paid on time. In isolation this prediction has limited business value — an accurate prediction of each customer paying on time doesn’t quite meet the goal of shrinking the cash revenue cycle. Instead, this team should think about the AI solution holistically: how can they align their predictions with key recommendations and actions that will help the user focus their time?

So how do we achieve this? We need to break down the silos between business leaders and data scientists. Critically, let’s get business leaders and data scientists to follow four key pillars together, which will align organizations around a smarter core approach:

  • MEASURE the KPI. Which business outcome are you tracking, and how does it measure your model’s impact?
  • INTERVENE based on what the AI prescribes. What organizational levers and limitations exist, and how can your AI provide guidance?
  • EXPERIMENT to measure impact. Build models and deploy these in controlled experiments to attribute impact to the use of AI.
  • ITERATE by constantly monitoring, optimizing and experimenting. Data changes, opportunities arise, no model lives in perpetuity.
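The EXPERIMENT pillar can be sketched in a few lines: deploy the model’s prescriptions for a treatment group, hold out a control group, and compare the KPI between them. This is a minimal simulation, not a real deployment — the group sizes, the 70% baseline on-time payment rate, and the assumed lift from AI-prescribed actions are all hypothetical numbers chosen for illustration.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# KPI: fraction of invoices paid within 30 days.
# Each entry is 1 (paid on time) or 0 (late).
def simulate_group(n, on_time_rate):
    return [1 if random.random() < on_time_rate else 0 for _ in range(n)]

control = simulate_group(1000, 0.70)    # business as usual
treatment = simulate_group(1000, 0.78)  # assumed lift from AI-prescribed actions

# Attribute impact to the AI by comparing the KPI across groups,
# not by comparing model accuracy metrics.
lift = statistics.mean(treatment) - statistics.mean(control)
print(f"KPI lift attributed to AI: {lift:.3f}")
```

In practice you would add a significance test and re-run the comparison continuously as part of the ITERATE pillar; the point here is that the measured quantity is the business KPI itself, not a model metric.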

These four pillars will help data scientists surface more valuable questions to their business counterparts and give those business leaders a deeper understanding of the power of AI within the organization. Too often it’s difficult or time-consuming for technologists to educate their business counterparts on AI or to ask why a particular predictive model is being suggested. AI can be more than the datasets that power it. Adopting these four pillars and having honest conversations early and often can lead to more agility and resilience — critical as local to global events shift the business landscape around us, from temporary anomalies to black swan events.

Let’s return to the business outcome we were discussing – receiving payment on invoices.

Typically, businesses will build a predictive model to flag which customers may be at risk of not paying on time. But if we focus on a better way of measuring impact, we’d turn that predictive flag into a prescriptive solution and train the model to increase the expected revenue received within 30 days of sending the invoice.

Today, the staff in accounts receivable may have several tools at their disposal to ensure that payment is collected within 30 days. Each has its own effectiveness, from phone calls to email nudges, automatic payment suggestions or texts about suspending service. Staff can choose from any number of these actions to try to hit a target, but they may be constrained in where to spend their time. A model that predicts outcomes alone falls short of helping the staff choose what action to take. Instead, try building models that predict outcomes given those interventions, thereby recommending the actions that yield optimal results.
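A sketch of that prescriptive step, assuming a model already produces an on-time payment probability conditional on each candidate intervention. The intervention names, probabilities, costs, and invoice amount below are hypothetical illustrations, not real figures:

```python
# Pick the intervention with the highest expected revenue net of its cost,
# given per-intervention payment probabilities from a (hypothetical) model.
def best_intervention(invoice_amount, predicted_probs, costs):
    """predicted_probs and costs map intervention name -> number."""
    def expected_value(action):
        return invoice_amount * predicted_probs[action] - costs[action]
    return max(predicted_probs, key=expected_value)

# Hypothetical model outputs and action costs for one $5,000 invoice.
probs = {"email_nudge": 0.62, "phone_call": 0.74, "payment_plan": 0.70}
costs = {"email_nudge": 1.0, "phone_call": 25.0, "payment_plan": 10.0}

action = best_intervention(5000.0, probs, costs)
print(action)  # phone_call: 5000*0.74 - 25 = 3675 beats the alternatives
```

The design choice is the key shift: the model’s output is consumed as an input to an action decision, so the quantity being optimized is expected revenue, not predictive accuracy.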

We’ve now turned our predictive flag program into prescribed interventions. Models are not meant to be static, however, so running tests, tracking real-time interactions, getting access to temporal data (in sequence) and monitoring your KPI are critical steps to making sure your models don’t crumble when facing unforeseen events. Models will not live in perpetuity, so be agile and know how to deploy new models. Iteration is not only about fixing problems; it is also about opportunity. Yes, it will let you respond quickly to problems like data drift, but it also lets you experiment – constantly moving your business forward.
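One concrete monitoring check from that iteration loop is a simple data drift guard: compare a feature’s live mean against its training-time distribution and flag large shifts. The feature (days-to-payment), the sample values, and the three-sigma threshold below are all hypothetical choices for the sketch:

```python
import statistics

# Flag possible data drift when a live feature's mean moves outside
# a z-threshold band around the training-time mean.
def drifted(train_values, live_values, z_threshold=3.0):
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    # standard error of the live mean, scaled by the threshold
    return abs(live_mu - mu) > z_threshold * sigma / (len(live_values) ** 0.5)

train = [30, 28, 35, 31, 29, 33, 30, 32]   # days-to-payment at training time
live_stable = [31, 30, 29, 32, 33, 30]     # similar behavior in production
live_shifted = [45, 48, 44, 47, 46, 49]    # payments suddenly much slower

print(drifted(train, live_stable))   # False
print(drifted(train, live_shifted))  # True
```

A real pipeline would run richer tests (distributional distances, per-segment checks) on rolling windows, but even a guard this simple turns "monitor your KPI" from advice into an automated trigger for retraining.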

This mind shift from predictive to prescriptive is a natural evolution in how we understand and harness AI within a business. And it is all the more important in today’s highly unpredictable economic and competitive business climate, where the ability to make real-time decisions and quickly deliver value can separate the winners from the losers.

Published February 25, 2021 — 23:03 UTC


Twitter’s new ‘Communities’ and ‘Super Follows’ will make it more like Facebook and Patreon

At its core, Twitter hasn’t changed all that much in the past few years. For the most part, people use it the same as they always have — a mostly public-facing ‘microblogging’ platform. But the company today highlighted multiple upcoming features that could significantly change the way people interact with one another on the platform, in many ways making it more versatile — and more like some of its competitors.

The announcements came as part of Twitter’s ‘Analyst Day.’ Here are some of the biggest ones.


Communities

Communities is essentially the Twitter version of Facebook Groups. It allows Twitter users to create hubs where they can gather based on common interests or locations, an extension of the company’s current Topics feature.

Per Twitter, Communities will make it “easier for people to form, discover, and participate in conversations that are more targeted to the relevant communities or geographies they’re interested in.” Twitter showed off some hypothetical groups around social justice, plants, cats, and surfing.

Super Follows

A little Patreon and a little Twitch, Super Follows allows Twitter users to, well, become ‘super followers’ of their favorite online accounts. Some of the exclusive perks Twitter is teasing include exclusive content and newsletters, discounts, supporter badges, and super-followers-only conversations.

It could help creators monetize their Twitter following without asking people to leave the platform. The company also teased some kind of tipping feature for creators, but did not provide many details about how it would work. It did have a $4.99/month subscription price in a mockup, though.

These new features aside, Twitter also highlighted a couple of upcoming features that have been making the rounds the last few months.


Revue

Instead of removing character limits, Revue is Twitter’s way of letting users publish newsletters for their audiences — these can be free or behind a paywall. Finally, there’ll be a place for you to stick your lengthiest tweetstorms.

The feature was technically announced last month, as it comes after Twitter acquired a company named, you guessed it, Revue.


Spaces

This one isn’t totally new for people who follow social media closely, but it’s essentially Twitter’s version of Clubhouse. In other words, it’s a place where you can actually talk to people using honest-to-goodness audio, and it features live AI captions for conversations as well.

The company has been testing Spaces since at least December, and the feature even has its own Twitter profile.

There’s no word on exactly when these features will land, but considering some of them are already being tested publicly, I’d guess sooner rather than later.


LA’s light rail extension could help revitalize neighborhoods and improve air quality

This article was originally published by Christopher Carey on Cities Today, the leading news platform on urban mobility and innovation, reaching an international audience of city leaders. For the latest updates follow Cities Today on Twitter, Facebook, LinkedIn, Instagram, and YouTube, or sign up for Cities Today News.

The Federal Transit Administration (FTA) has granted the Los Angeles County Metropolitan Transportation Authority (Metro) a Record of Decision for its East San Fernando Valley light rail transit project, certifying that the scheme has satisfied federal guidelines for environmental analysis.

The decision paves the way for the authority to seek federal funding for the design and construction of the 14.8-kilometer project, which will connect the Van Nuys Metro “G” (Orange) Line Station with the Sylmar/San Fernando Metrolink Station.

“The East San Fernando Valley Light Rail project has been one of my top transportation priorities since I was elected to the City Council,” said Metro Director Paul Krekorian.

“This critical backbone project will be the first light rail line in the Valley, connecting communities, revitalizing neighborhoods, reducing congestion, and improving air quality. Last month we pushed the project forward with a US$30 million (£21.7 million) investment in utility work to expedite construction.

“Now, with the Federal Transportation Authority’s Record of Decision, this line becomes eligible for federal funding opportunities, and we are well on our way toward full funding and completion of the foundation for the future of transit in the San Fernando Valley.”

The light rail line will travel primarily along Van Nuys Boulevard — one of the Valley’s most heavily traveled corridors. Metro received the State of California’s environmental clearance for the project last December.

With an end-to-end travel time of 31 minutes, daily boardings are anticipated to exceed 30,000 by 2040.

First/last mile

Metro has also developed a first/last mile plan for the project that identifies improvements to make it safer to walk and bike to and from the 14 planned transit stations.

The authority will work with the City of Los Angeles to identify an improved first/last mile parallel bike route to replace existing bike lanes on Van Nuys Boulevard that would be removed by the project in Panorama City and Pacoima.

“The East San Fernando Valley Light Rail Transit Project is just one of several major transportation improvements we have in store for the San Fernando Valley,” said Metro CEO Phillip A. Washington. “It just happens to be the first one to go into construction as we deliver on our promise of better mobility for Valley residents.”

The project will officially begin major construction in 2022 and is scheduled to open by 2028.


Published February 26, 2021 — 12:00 UTC