Crunchyroll, the popular anime streaming service, may soon have a new owner. According to a report by Nikkei, Sony is close to finalizing a deal to buy the service for “more than 100 billion yen,” or roughly $955 million.
AT&T is Crunchyroll’s current parent company, having bought it in 2018. Though Crunchyroll isn’t quite the size of Hulu or Netflix, with about 70 million free subscribers (3 million of them paying), its specialized content could help Sony build strong foundations with a loyal audience. The company has already seen major success with the release of the animated Demon Slayer movie, which became the highest-grossing movie in Japan.
Anime’s clout is also evidenced by Netflix’s increasing forays into the category. And as noted by Nikkei, the anime market has been steadily growing, worth about $21 billion globally in 2018, 1.5 times what it was five years earlier.
Hardware aside, Sony is of course already an entertainment mogul with its movies, games, and music; acquiring Crunchyroll would significantly expand its market in the TV arena. Besides, it makes some sense for a Japanese company to own a service dedicated so heavily to anime.
In September, the International Space Station had to dodge an unknown piece of debris. With the volume of space trash rapidly growing, the chances of a collision are increasing.
The European Space Agency (ESA) wants to clean up some of the mess — with the help of AI. In 2025, it plans to launch the world’s first debris-removing space mission: ClearSpace-1.
The technology is being developed by Swiss startup ClearSpace, a spin-off from the Ecole Polytechnique Fédérale de Lausanne (EPFL). Its removal target is the now-obsolete Vespa Upper Part, a 100 kg payload adaptor orbiting 660 km above the Earth.
ClearSpace-1 will use an AI-powered camera to find the debris. Its robotic arms will then grab the object and drag it down into the atmosphere, where it will burn up.
“A central focus is to develop deep learning algorithms to reliably estimate the 6D pose (three rotations and three translations) of the target from video-sequences even though images taken in space are difficult,” said Mathieu Salzmann, an EPFL scientist spearheading the project. “They can be over- or under-exposed with many mirror-like surfaces.”
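For the unfamiliar, a 6D pose is just those six numbers packed into a rigid-body transform. A minimal numpy sketch of the representation follows; the rotation order is an assumption for illustration, and the actual estimator is a deep network predicting these six values from video, not this function:

```python
import numpy as np

def pose_to_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous transform from a 6D pose:
    three rotation angles (radians, composed here as Rz @ Ry @ Rx,
    an assumed convention) and three translations."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx  # combined rotation
    T[:3, 3] = [tx, ty, tz]   # translation
    return T
```

Estimating these six values reliably from over- or under-exposed frames with mirror-like surfaces is what makes the problem hard.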
Vespa hasn’t been seen for seven years, so EPFL will use a database of synthetic images to simulate its current appearance as training material for the algorithms.
Once the mission begins, the researchers will capture real-life pictures from beyond the Earth’s atmosphere to fine-tune the AI system. The algorithms also need to be transferred to a dedicated hardware platform onboard the capture satellite.
“Since motion in space is well behaved, the pose estimation algorithms can fill the gaps between recognitions spaced one second apart, alleviating the computational pressure,” said Professor David Atienza, head of ESL.
“However, to ensure that they can autonomously cope with all the uncertainties in the mission, the algorithms are so complex that their implementation requires squeezing out all the performance from the platform resources.”
If the capture is successful, it could pave the way for further debris-removal missions that can make space a safer place.
HP’s Envy 15 laptop for 2020 is at once a practical and beautiful option for people who find themselves doing more than just running a browser and streaming music. The machine offers a bunch of configurations aimed at content creators — folks who work on imaging, audio, and video production — including powerful Intel 10th-gen chips, a selection of graphics processing hardware ranging from Intel UHD to the Nvidia RTX 2060, and an array of display options.
At the same time, it manages to look discreet and professional, with a silver aluminum finish across the all-metal body, and a polished chrome badge on the lid. And while it’s not exactly a steal, the price (Rs. 150,000 in India, with a similar model at $1,250 in the US) feels about right for what you get.
I spent a couple of weeks with the 15-ep0123TX, which appears to be available only in India; different configurations can be found in other countries, but the plot remains the same. Here are the specs of the model I tested.
10th-gen Intel Core i7-10750H (2.6 GHz, up to 5 GHz, 6 cores)
15.6-inch 1080p IPS display, rated at 300 nits brightness, and 72% NTSC
Fingerprint reader for Windows Hello, microSD media card reader, camera shutter and mic mute keys
83Wh 6-cell battery
Depending on where you’re shopping, you can also find variants with a 4K OLED display, Intel Optane Memory in addition to the SSD, and more powerful graphics cards. Here’s what it’s like to use the Envy 15 for creative work, and a bit of play.
What’s great about the Envy 15?
I love the way this laptop looks, with its classy branding, matte finished metal exterior, narrow bezels around the display, and the minimalist design of the keyboard deck. I especially like how the simple white backlit keys look, along with the speaker grilles on either side, and nothing else cluttering up the deck.
Speaking of the keyboard, it might be among my favorite laptop keyboards of the last couple of years. I found myself typing briskly and accurately on the Envy 15’s chiclet keys; the lack of key travel surprisingly didn’t bother me in the least. That said, I’ll acknowledge this might have a lot to do with how I type, and my preference for ‘speedier’ keyboards with less travel.
The body is fairly compact for a 15-incher with a reasonably powerful graphics card and necessary heat outlets. It easily slipped into a bunch of my bags and backpacks designed for this size, which is notable considering that many other similarly specced laptops can be too bulky to fit.
The Envy 15 is indeed a capable performer; the configuration I tested breezed through productivity apps and a browser loaded with tabs, and made light work of image editing with Photoshop, and music production sessions with Ableton Live. The screen is decent at reproducing colors accurately given the laptop’s price point, though it could be brighter.
The mid-range 1660 Ti Max-Q card helped run several recent games commendably; with titles like Remnant: From the Ashes, it managed consistent frame rates (up to the 1080p display’s limit of 60 fps) without getting too hot or loud. With more graphics-intensive titles, you’ll naturally have to turn down a parameter or two to keep things running smoothly — but it’s adequate for casual gamers.
HP claims you can somehow get up to 16.5 hours of use on a single charge; I didn’t get anywhere near that with my workloads, but I wasn’t expecting to, either. Photoshop, Slack, Chrome, and Ableton Live all like to chew through batteries, and even more so if you’ve got the brightness turned up to medium-high. I got about five hours without a fuss, which is decent for this sort of machine, and as the company claimed, the battery charged up to 50% in roughly 45 minutes.
What’s not so great?
With a starting weight of 2.14 kg, the Envy 15 isn’t something you’ll want to lug around in a strapless sleeve. The charging brick is also roughly the same size and weight as those you get with hefty gaming laptops, and there’s no support for charging with a USB-C adaptor, so the package isn’t as portable as I’d like.
You may not appreciate the power key being nestled in alongside the other keys. It is hard to find when you want to get started in a hurry, and there’s a chance you might hit it by accident.
I faced a couple of odd issues, which might just have something to do with this particular test machine, but they should have been easier to sort. HP’s Command Center app for adjusting power modes was simply nowhere to be found (despite there being a physical key to launch it), and the package I installed from the site didn’t help either.
The webcam was also rather finicky; it didn’t work automatically with Google Meet and some other conferencing apps. I got it to work with some fiddling, but I couldn’t figure out exactly what was causing the problem. Turning it off and back on again with the shortcut key didn’t immediately bring it back to life. The mic mute also didn’t do anything, and I wonder if this has something to do with the Command Center software that was AWOL. I can only hope this is a one-off issue with this device, and can be addressed by HP via software fixes.
Should you consider the Envy 15?
The Envy 15 should find fans among folks who need a fair bit of computing and graphics power under the hood, but don’t want a bulky or overdesigned machine.
It’s reasonably priced at Rs. 150,000 ($2,109) in India, but that’s about it. It’s not a great deal in terms of value for money; you simply get what you pay for. Laptops are generally priced higher in India than in the US, and you see that playing out here too.
This particular model with a 1660 Ti graphics card isn’t available in the US, but one with a 1650 comes in at $1,250, and so do similarly specced models from Lenovo and MSI. So I’d recommend looking for sales or discounts on this wherever you plan to buy it.
If you have a product webpage, you might wish you could create a handy video with all the info to show to clients. However, you might not have the budget to hire someone to make one. To solve that, Google’s AI team is working on a solution that automatically converts webpages into videos.
Google’s URL2Video tool helps you convert your website into a short video if you specify the constraints of the output video, such as the duration and aspect ratio. The tool tries to maintain the design language of the source page and uses its elements such as the text, images, and clips to create a new video.
To train the model, the company interviewed designers to determine important aspects of a webpage derived from heuristics. Based on these parameters, the tool analyzes a page and ranks key elements. Then based on conditions provided by the users, it selects top elements such as headers and images to churn out a video.
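Google hasn’t published the exact heuristics, but the selection step can be sketched as a simple scoring pass over the page’s elements. The element types, weights, and shot length below are purely illustrative, not the actual URL2Video model:

```python
def rank_elements(elements, duration_s, seconds_per_shot=3):
    """Score page elements by simple design heuristics (type and on-page
    area) and keep the top ones that fit the requested video duration.
    Weights are hypothetical stand-ins for learned/interview-derived rules."""
    weights = {"header": 3.0, "image": 2.0, "text": 1.0}
    scored = sorted(
        elements,
        key=lambda e: weights.get(e["type"], 0.5) * e["area"],
        reverse=True,
    )
    max_shots = int(duration_s // seconds_per_shot)
    return scored[:max_shots]

# A toy page: each element carries its type and fraction of page area.
page = [
    {"type": "header", "area": 0.2},
    {"type": "image", "area": 0.5},
    {"type": "text", "area": 0.1},
    {"type": "text", "area": 0.05},
]
```

With a six-second target and three seconds per shot, this sketch would keep only the two highest-scoring elements, which mirrors the described behavior of picking top headers and images under the user’s duration constraint.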
Once the video is generated, you can change colors and style to regenerate the footage according to your needs. You can see an example of the tool being used to create a video for Google Search below.
Google’s not the only company working on this. Baidu has built and rolled out, in a limited capacity, an AI that can generate news videos with voiceover from a single URL.
In the future, Google’s AI team wants to add the ability to add audio tracks and voiceovers during the editing process.
You can read about the tool’s technical details in this paper.
This article originally featured in Byte Me, our monthly feminist newsletter that makes everyone mad. In each edition, we choose a “word of the month” that we’ve either made up or found on the internet. Subscribe here to get it straight to your inbox!
Remember when incels would mostly reside in the dark, stinky corners of Reddit and 4chan? They’ve now conquered your father’s favorite social network, LinkedIn — giving birth to an awful new thing:
To quickly refresh your memory, incels are ‘involuntary celibates,’ people (let’s face it, mainly men) who believe they can’t get laid because they’re not attractive enough. Most of them hate women. Some of them shoot up shopping malls.
An incel with a LinkedIn profile might not take it that far — they’re focused on their careers, after all — but that doesn’t mean they can’t hit on random women via LinkedIn messages.
In a recent piece for Fast Company, author Katie Fiore flags a disturbing new trend on the social platform: women users are increasingly dealing with harassment and misogyny.
Fiore researched the reactions to articles posted by The Female Lead, an online platform that focuses on female leadership. She noted that:
…many of the comments were filled with derision, marginalization, and even outright hate directed at the female subject of the post. Comments from professional men whose pictures, names, and places of work were visible for everyone to see. Men who felt so comfortable with their misogyny that they were empowered to share it on a platform for professionals designed to help us advance our networks and careers.
And the misogyny on LinkedIn isn’t just hiding in comments and DMs — it’s also in the content itself. Last week, this post landed on my feed:
In addition to being regressive and demeaning — which, for the record, is pointed out by many readers in the comments — why is this post even on LinkedIn?
It’s pretty ironic to suggest that all women should devote their lives to homemaking; the one job you won’t find on LinkedIn. Also, what the hell are safety pads, and why would women need men to buy them?
LinkedIn, for its part, has introduced some countermeasures. Before publishing new posts, users will be reminded to respect the guidelines and keep it professional. Direct messages that are deemed inappropriate by LinkedIn’s algorithm will show up with a warning:
While such a feature lowers the barrier for users to report unwanted advances, LinkedIn is unclear about what will happen after a report, saying only that it will take “appropriate actions.”
And, as Fiore states in her Fast Company article, these tools don’t seem to acknowledge the underlying issue: that it’s mainly women who need to deal with this crap. They’re on LinkedIn to further their professional careers, and now they have to become full-time unpaid moderators, too?
So, a closing note to the men: stop using LinkedIn to hit on women. If you come across a pretty profile picture, no need to tell her about it. And please, please don’t send unsolicited dick pics via InMail. Nobody endorsed you for being a pathetic little pervert.
One of the things that caught my eye at Nvidia’s flagship event, the GPU Technology Conference (GTC), was Maxine, a platform that leverages artificial intelligence to improve the quality and experience of video-conferencing applications in real-time.
Maxine uses deep learning for resolution improvement, background noise reduction, video compression, face alignment, and real-time translation and transcription.
In this post, which marks the first installment of our “deconstructing artificial intelligence” series, we will take a look at how some of these features work and how they tie in with AI research done at Nvidia. We’ll also explore the pending issues and the possible business model for Nvidia’s AI-powered video-conferencing platform.
Super-resolution with neural networks
The first feature shown in the Maxine presentation is “super resolution,” which according to Nvidia, “can convert lower resolutions to higher resolution videos in real time.” Super resolution enables video-conference callers to send lo-res video streams and have them upscaled at the server. This reduces the bandwidth requirements of video-conferencing applications and can make their performance more stable in areas with spotty network connectivity.
The big challenge of upscaling visual data is filling in the missing information. You have a limited array of pixels that represent an image, and you want to expand it to a larger canvas that contains many more pixels. How do you decide what color values those new pixels get?
Older upscaling techniques use various interpolation methods (bicubic, Lanczos, etc.) to fill the space between pixels. These techniques are too general and can yield mixed results across different types of images and backgrounds.
One of the benefits of machine learning algorithms is that they can be tuned to perform very specific tasks. For instance, a deep neural network can be trained on scaled-down video frames grabbed from video conference streams and their corresponding hi-res original images. With enough examples, the neural network will tune its parameters to the general features found in video-conference visual data (mostly faces) and will be able to provide a better low- to hi-res conversion than general-purpose upscaling algorithms. In general, the narrower the domain, the better the neural network’s chances of converging on very high accuracy.
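A sketch of how such training pairs might be generated: pair each hi-res frame with a downscaled copy of itself. Real pipelines use richer degradation models (compression artifacts, noise), so treat this as a minimal illustration of the idea, not Nvidia’s actual preprocessing:

```python
import numpy as np

def downscale(frame, factor=2):
    """Average-pool a hi-res frame (2D grayscale array) to produce its
    lo-res training input. Assumes height and width divide by `factor`."""
    h, w = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def make_training_pairs(hi_res_frames, factor=2):
    """Yield (lo_res_input, hi_res_target) pairs for a super-resolution
    model: the network learns to invert the downscaling."""
    return [(downscale(f, factor), f) for f in hi_res_frames]
```

The network never sees anything but faces talking to a camera, which is exactly why a narrow training set like this can beat general-purpose interpolation.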
There’s already a solid body of research on using artificial neural networks to upscale visual data, including a 2017 Nvidia paper that discusses general super resolution with deep neural networks. With video conferencing being a very specialized case, a well-trained neural network is bound to perform even better than it would on more general tasks. Aside from video conferencing, there are applications for this technology in other areas, such as the film industry, which uses deep learning to remaster old videos at higher quality.
Video compression with neural networks
One of the more interesting parts of the Maxine presentation was the AI video compression feature. The video posted on Nvidia’s YouTube channel shows that using neural networks to compress video streams reduces bandwidth from ~97 KB/frame to ~0.12 KB/frame, which is a bit exaggerated, as users have pointed out on Reddit. Nvidia’s website states developers can reduce bandwidth use down to “one-tenth of the bandwidth needed for the H.264 video compression standard,” which is a much more reasonable—and still impressive—figure.
How does Nvidia’s AI achieve such impressive compression rates? A blog post on Nvidia’s website provides more detail on how the technology works. A neural network extracts and encodes the locations of key facial features of the user for each frame, which is much more efficient than compressing pixel and color data. The encoded data is then passed on to a generative adversarial network along with a reference video frame captured at the beginning of the session. The GAN is trained to reconstruct the new image by projecting the facial features onto the reference frame.
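Nvidia hasn’t published the exact encoding, but some back-of-the-envelope arithmetic shows why keypoints beat pixels. The landmark count and precision below are assumptions for illustration:

```python
def keypoint_bytes(n_landmarks=130, floats_per_landmark=2, bytes_per_float=4):
    """Bytes per frame if we only send (x, y) facial landmarks as float32.
    130 landmarks is an assumed figure, not Nvidia's published number."""
    return n_landmarks * floats_per_landmark * bytes_per_float

def raw_frame_bytes(width=1280, height=720, channels=3):
    """Bytes per uncompressed 720p RGB frame, for comparison."""
    return width * height * channels

# Ratio of raw pixel data to landmark data per frame.
ratio = raw_frame_bytes() / keypoint_bytes()
```

Even before any conventional compression of either stream, the landmark representation is thousands of times smaller per frame, which is why pushing reconstruction to a GAN on the receiving end pays off.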
The work builds on previous GAN research done at Nvidia, which mapped rough sketches to rich, detailed images and drawings.
The AI video compression shows once again how narrow domains provide excellent settings for the use of deep learning algorithms.
Face realignment with deep learning
The face alignment feature readjusts the angle of users’ faces to make it appear as if they’re looking directly at the camera. This is a problem that is very common in video conferencing because people tend to look at the faces of others on the screen rather than gaze at the camera.
Although there isn’t much detail about how this works, the blog post mentions that it uses GANs. It’s not hard to see how this feature could be bundled with the AI compression/decompression technology. Nvidia has already done extensive research on landmark detection and encoding, including the extraction of facial features and gaze direction at different angles. Those encodings can be fed to the same GAN that projects the facial features onto the reference image, letting it handle the rest.
Where does Maxine run its deep learning models?
There are a lot of other neat features in Maxine, including the integration with JARVIS, Nvidia’s conversational AI platform. Getting into all of that would be beyond the scope of this article.
But some technical issues remain to be resolved. For instance, one question is how much of Maxine’s functionality will run on cloud servers and how much on user devices. In response to a query from TechTalks, a spokesperson for Nvidia said, “NVIDIA Maxine is designed to execute the AI features in the cloud so that every user [can] access them, regardless of the device they’re using.”
This makes sense for some of the features such as super resolution, virtual background, auto-frame, and noise reduction. But it seems pointless for others. Take the AI video compression feature, for example. Ideally, the neural network doing the facial expression encoding must run on the sender’s device, and the GAN that reconstructs the video frame must run on the receiver’s device. If all these functions are carried out on servers, there would be no bandwidth savings, because users would send and receive full frames instead of the much lighter facial expression encodings.
Ideally, there should be some sort of configuration that lets users choose the balance between local and cloud-based AI inference based on their network and compute resources. For instance, a user with a workstation and a powerful GPU might want to run all deep learning models locally in exchange for lower bandwidth usage or cost savings. On the other hand, a user joining a conference from a mobile device with little processing power would forgo local AI compression and defer virtual background and noise reduction to the Maxine server.
What is Maxine’s business model?
With the COVID-19 pandemic pushing companies to implement remote-working protocols, it seems as good a time as any to market video-conferencing apps. And with AI still at the peak of its hype cycle, companies tend to rebrand their products as “AI-powered” to improve sales. So I’m generally a bit skeptical about anything that has “video conferencing” and “AI” in its name these days, and I think many such products will not live up to the promise.
But I have a few reasons to believe Nvidia’s Maxine will succeed where others fail. First, Nvidia has a track record of doing reliable deep learning research, especially in computer vision and more recently in natural language processing. The company also has the infrastructure and financial means to continue to develop and improve its AI models and make them available to its customers. Nvidia’s GPU servers and its partnerships with cloud providers will enable it to scale as its customer base grows. And its recent acquisition of mobile chipmaker ARM will put it in a suitable position to move some of these AI capabilities to the edge (maybe a Maxine-powered video-conferencing camera in the future?).
Finally, Maxine is an ideal example of narrow AI being put to good use. As opposed to computer vision applications that try to address a wide range of issues, all of Maxine’s features are tailored for a special setting: a person talking to a camera. As various experiments have shown, even the most advanced deep learning algorithms lose accuracy and stability as their problem domain expands. Conversely, neural networks are more likely to capture the real data distribution as their problem domain becomes narrower.
But as we’ve seen on these pages before, there’s a huge difference between an interesting piece of technology that works and one that has a successful business model.
Maxine is currently in early access mode, so a lot of things might change in the future. For the moment, Nvidia plans to make it available as an SDK and a set of APIs hosted on Nvidia’s servers that developers can integrate into their video-conferencing applications. Corporate video conferencing already has two big players, Microsoft Teams and Zoom. Teams already has plenty of AI-powered features, and it wouldn’t be hard for Microsoft to add some of the functionalities Maxine offers.
What will be the final pricing model for Maxine? Will the benefits provided by the bandwidth savings be enough to justify those costs? Will there be incentives for large players such as Zoom and Microsoft Teams to partner with Nvidia, or will they add their own versions of the same features? Will Nvidia continue with the SDK/API model or develop its own standalone video-conferencing platform? Nvidia will have to answer these and many other questions as developers explore its new AI-powered video-conferencing platform.
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
One of my favorite things about being a Google Fi subscriber is the service’s built-in VPN service; it’s a neat added layer of security, especially when connected to public Wi-Fi. I’ve also wondered why Google hasn’t made this service available more widely — but no more.
Google today announced it’s bringing its VPN service to Google One subscribers on the 2TB plan or higher — the ones starting at $10 a month. This goes for families too — you can share VPN access with up to 5 family members at no extra cost.
For those not familiar, VPNs allow you to browse the web more securely by encrypting your connection and hiding your IP address. Your data goes through a VPN server before it hits the wider web, where it’s encrypted against potential hackers or others looking to track your information.
Google is also offering one-on-one ‘Pro Sessions’ for users to “learn about VPNs and how to stay safer online.” These appointments are available on a first-come, first-served basis.
Of course, there are plenty of other VPN services out there, and Google’s doesn’t seem to be particularly exceptional or unique — at the moment, it seems to be more about convenience for Google One subscribers than anything else.
Unfortunately, VPN access will only be available for Android users to start (in the coming weeks), but the company plans to expand the service to iOS, Windows, and Mac in the coming months.
For more gear, gadget, and hardware news and reviews, follow Plugged on Twitter and Flipboard.
Two years after Xiaomi took the top spot in the Indian smartphone market, Samsung has finally overtaken it this quarter, according to a new report by research firm Counterpoint.
Buoyed by rising “anti-China sentiment” among Indian consumers and by launching a barrage of new models in the country, Samsung managed to gain a 24% share in the market for Q3 2020. Xiaomi slipped down to the second spot with a 23% share, followed by Vivo and Realme.
Samsung launched 18 phone variants in the market from April to September, compared to Xiaomi’s 22 (including phones from Poco). However, Counterpoint’s report suggests that Samsung’s aggressive online push and effective distribution did the trick, while Xiaomi’s sales slumped due to the manufacturing and distribution gaps created by the coronavirus pandemic.
Counterpoint’s research analyst, Shilpi Jain, said that anti-China sentiment took hold among some consumers after a skirmish between the two countries’ armies at the border in June:
During the start of the quarter, we witnessed some anti-China consumer sentiments impacting sales of brands originating from China. However, these sentiments have subsided as consumers are weighing in different parameters during the purchase as well. The brands have been quite aggressive as they started building up inventory much ahead of the festive season.
Indian phone maker Micromax, which held the first position in the Indian market at one point, is aiming to take advantage of that “anti-China sentiment” and make a comeback with a new series called “in” next month.
The research firm said that overall, it was a strong quarter as more than 53 million units were shipped with a 9% year-on-year growth. It also noted that Samsung’s lead is not definitive, and Xiaomi could easily make a comeback in the next quarter to claim the top spot again.
Man, AMD is on a roll. It was just a few years ago that the company seemed destined to fade into relative obscurity in fields dominated by Intel and Nvidia. Now in 2020, the company is managing to trade blows with its heavyweight competitors in two different markets. Case in point: the company today revealed the Radeon RX 6000 series, specced and priced to offer a real challenge to Nvidia’s RTX 3000 cards — and potentially save you some money.
That’s no small feat, considering the fact that AMD was behind the previous generation and that Nvidia’s new cards are one of its biggest generational leaps ever. Yet if AMD’s performance metrics are to be believed, the RX 6800 ($579), 6800 XT ($649), and 6900 XT ($999) offer real competition to the RTX 3070 ($500), 3080 ($700), and 3090 ($1,500), respectively. The RX 6900 XT in particular looks like a fantastic deal.
AMD claims its new RDNA 2 technology offers up to twice the performance of the prior generation while drawing roughly the same amount of power (and less than the Nvidia equivalents). This is despite the fact that the company is using the same 7-nanometer transistor size as in its previous cards.
One source of the improvement is something called “Infinity Cache,” essentially 128MB of on-die cache to reduce latency in GPU processing. The company isn’t providing many other specifics on how it’s achieving these gains, but hey, we’ll take ’em.
Ray tracing has been a huge part of Nvidia’s graphics push over the last few years, so the new cards support accelerated DirectX raytracing on each compute unit, for up to ten times faster performance compared to software-based raytracing. The cards also support AMD’s FidelityFX, an open-source toolkit for developers to easily add high-quality lighting effects.
Variable Rate Shading, for instance, allows developers to spend fewer processing resources on areas of the image that aren’t as important, while a denoiser should make for cleaner ray-traced lighting. AMD also has something called ‘Super Resolution,’ which appears to be the company’s analog to Nvidia’s DLSS upscaling technology.
AMD also has a subtler advantage: its technology is used in the PS5 and Xbox Series. That should give the company a leg-up on cross-platform titles.
Here are some slides AMD provided comparing its cards against Nvidia’s. It’s likely AMD chose games where its cards are more performant, but it’s impressive to see the cards go toe-to-toe nonetheless. Here’s the RX 6800 against the RTX 2080 Ti, which is roughly equivalent to the RTX 3070:
Here’s how the RX 6800 XT compares to the RTX 3080. In this chart, AMD also shows off something called ‘smart access memory,’ a proprietary technology that allows computers running the latest Ryzen processors to directly access the GPU’s memory:
And lastly, here’s the RX 6900 compared to the RTX 3090:
It’s the first time in a long time that team red has had flagship graphics cards that appear to be fully on par with Nvidia’s — while apparently drawing less power too.
The proof is in the pudding though, so we’ll learn more once the cards can be benchmarked by independent users and publications. The RX 6800 and 6800 XT will be available starting November 18, while the 6900 XT arrives December 8.
If you’ve got 25 minutes to kill, you can watch AMD’s full announcement below:
I enjoy gaming as much as the next nerd, but I’m not about that RGB life, or one for edgy industrial design in my hardware. Thankfully, a few PC and peripheral brands have begun accommodating people like me lately, and I’m glad to see Lenovo get in line with its Legion range of gaming laptops, including the all-powerful Legion 7i.
Packed to the gills with top-end components for serious performance, the Legion 7i doesn’t immediately give away the fact that it’s built to obliterate baddies. Its design allows for commendable cooling, narrow bezels around the screen, a comfortable keyboard, and a webcam above the display where it should be — making for a laptop that I’m happy to use for work and play.
I’ll break down this baby at length, but let me start with the specs of the model I tested.
15.6-inch anti-glare 1080p display with 144Hz refresh rate
Intel Wi-Fi 6
It’s a major upgrade from the Legion Y740, with a notable bump up to a 10th-gen Intel CPU with 8 cores and 16 threads. The 7i also gets a few improvements in the keyboard, and the trackpad is now 39% larger and gets Windows Precision Drivers. The revamped design achieves an 85% screen-to-body ratio, and the screen rests on a 180-degree hinge that makes it easy to open with one hand.
What’s great about the Legion 7i
The 7i is among the most powerful gaming laptops you can buy right now, and it’s sensibly appointed for that purpose. The CPU and GPU deliver high frame rates in all the latest titles with settings cranked to the max or nearly there — including ray tracing for better lighting and reflection effects, thanks to the beefy RTX 2080 Super Max-Q.
The hardware is complemented by a fantastic display with a high refresh rate (up to 144Hz) for smooth animations sans screen tearing, thanks to Nvidia G-Sync. It’s bright and vibrant to boot; Lenovo says it covers 100% of the sRGB color gamut, and it also supports HDR.
Firing up Forza Horizon 4 and Rage 2, as well as ray tracing-enabled titles like Bright Memory and Ghostrunner, I got frame rates above 100fps with graphics settings turned all the way up at 1080p. With the laptop plugged into my TV via HDMI, most games also ran at that display’s limit of 60fps at 4K, though I did have to turn the odd graphics setting down to keep things running smoothly.
The keyboard offers good feedback, and is a delight to type and play on, thanks to well-spaced keys. There’s a numpad as well, and the large trackpad is easy to use.
Build quality on the 7i is solid. It gets an aluminium chassis with a sophisticated matte finish that looks sleek on the keyboard deck as well as the exterior. There’s also an oleophobic coating on the deck, which seemed to hold up pretty well over the past few weeks. No, I’m not snacking at my desk, but you usually see spots on the most frequently hit keys on other laptops within just a few days; the 7i still looks clean at the end of my review schedule.
The large-ish vents along the back and sides have RGB lights within; there’s also per-key RGB lighting, as well as lights for the logo on the lid and a strip along the bottom. Thankfully, you can configure all of it, setting it to a single subtle color or turning it off entirely if you like.
The speakers just above the keyboard are also fairly decent, and get loud enough to drown out the fans while you’re gaming. As with most laptop speakers, I wouldn’t listen to music on them — but I’ll happily catch a movie or TV show on them in a pinch.
Lastly, the 7i offers enough ports to make this laptop easy to live with. You’ve got a couple of USB-C ports on one side, a USB-A port on the other, and a couple more on the back, along with HDMI and power. We’ve seen a lot of gaming laptops adopt this rear port panel lately, and I’m glad Lenovo followed that playbook here. Oh, and the webcam gets a physical switch to block its view if you’re paranoid about that kind of thing.
What’s not so great about the 7i?
I only have a couple of minor gripes with the 7i, and I imagine they’re common to most gaming laptops, especially those with high-end specs. It’s a tad heavy at 2.25 kg (4.96 lb) — the exact weight varies with the configuration you choose — and a bit too large for many of the bags and backpacks that fit other general-purpose 15-inch laptops.
I’d also have liked to see support for USB-C charging, a smaller power brick, and better battery life for work tasks. The 80Wh battery should last about 5.5 hours with productivity software (not graphics-intensive games), though my mileage varied depending on my workload and the combination of apps I used. Between Chrome with several tabs, Photoshop 2020, Slack, and a couple more Electron-based tools, I got closer to four hours with battery saver settings enabled.
As for keeping the 7i cool while gaming — sadly, even with these large vents and ‘vapor chamber’ cooling, the machine can still run uncomfortably hot when taxed. It’s not a deal-breaker, but this isn’t exactly the gold standard for thermal performance.
Should you consider the Legion 7i?
If you’re in the market for a top-shelf gaming laptop right now, this is an excellent performer that can deliver the goods when it comes to the latest titles.
Depending on the variant you choose and where you shop, you can get a pretty good deal on the 7i that beats out similarly specced competition from the likes of Razer and Alienware. In India, this model will set you back Rs. 249,990 ($3,373), and I couldn’t find anything else with these specs near that price point in the country. Lenovo has a similar model (part number 81YT0005US) priced at $2,750, but it’s currently available at $2,210 — and this one comes with double the RAM and an additional 512GB SSD.