The New Screen Savers 120: Echo, Echo, Echo…

Leo Laporte and Roberto Baldwin try out the new Multi-Room Music feature on the Amazon Echo. We’ll set up several Echos in the studio and find out how it works. Anker has a new sister brand for smart-home devices called Eufy, and one of its latest products is a tiny smart speaker called the Genie; we’ll see how it compares to the Echo Dot. Megan Morrone continues her #DigitalCleanse. This week is number four: cleaning up your cloud storage. We’re going to revive an old Mac Pro (mid-2010) for our ‘Call for Help’ and install NVIDIA’s flagship gaming GPU, the GeForce GTX 1080 Ti. We’ll take a first look at Tovala’s meal-kit service, whose meals cook themselves in a smart oven that can steam, bake, and broil. Jason Howell shows us some AR apps on the Asus Zenfone AR. In the ‘Mail Bag,’ we answer questions about what to do with vintage computers and tracking apps for Android.

– Apple to unveil the next iPhone on September 12th at the new Apple campus.
– Juicero shuts down and will stop selling juice packs.
– In an email snafu, Essential shared driver’s license numbers of some customers.
– Roberto was at the Hyperloop Pod Competition, where the winning pod hit over 200 MPH.

This Week in Computer Hardware 405: NVIDIA GTX 1080 Ti Madness!

Nvidia’s new GTX 1080 Ti: we’ve got benchmarks and a review! Does overclocking the Ryzen 7 1700 boost gaming performance? And where are all the Ryzen motherboards, anyhow? Plus a review of Logitech’s G533 wireless 7.1 gaming headset, $105,000 in electrostatic headphones from Sennheiser, HiFiMan, and Mr. Speakers, and some bargain earbuds that sound great!

Nvidia’s upcoming Pascal GPU pictured with HBM 2.0 – TechSpot

Nvidia’s next-generation graphics core, codenamed ‘Pascal’, is expected to launch in the first half of 2016, bringing a large jump in performance that should impress PC gaming enthusiasts. Before Nvidia could publicly detail the GPU, however, a slide from the company’s GTC Taiwan 2015 presentation leaked, giving us an early look at the design of the chip.

While we aren’t getting a complete look at a board featuring Nvidia’s Pascal GPU, the image from GTC Taiwan 2015 does show the die closely flanked by HBM 2.0. This second-generation high bandwidth memory technology is expected to give Pascal memory bandwidth in the 1 TB/s range, double that of the HBM 1.0 AMD used with their current-gen Fiji GPUs.
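As a rough sanity check on those figures (the per-stack numbers below are taken from the published HBM specifications rather than from the leaked slide, so treat them as assumptions): each HBM stack has a 1024-bit interface, with first-generation HBM running at 1 Gb/s per pin and HBM 2.0 at up to 2 Gb/s per pin. With a Fiji-style four-stack layout:

\[
\text{HBM 1.0: } 4 \times \frac{1024 \times 1\ \text{Gb/s}}{8} = 4 \times 128\ \text{GB/s} = 512\ \text{GB/s},
\qquad
\text{HBM 2.0: } 4 \times \frac{1024 \times 2\ \text{Gb/s}}{8} = 4 \times 256\ \text{GB/s} \approx 1\ \text{TB/s}.
\]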

Rumor has it that Nvidia will include a whopping 16 GB of HBM 2.0 with their top-end Pascal products, while the chip itself will feature up to 17 billion transistors. Nvidia will be able to cram that many transistors into a reasonable die size thanks to the use of TSMC’s 16nm FinFET+ manufacturing process, which is an effective die shrink compared to current 28nm technology.

Pascal is expected to support mixed floating point precision as well, going beyond what Kepler and Maxwell support by adding FP16 execution alongside FP32 and FP64. Pascal can allegedly perform FP16 calculations at twice the rate of FP32, so if games are willing to sacrifice precision, Pascal can provide a speed boost.
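To put a number on that “twice the rate” claim (the core count and clock below are hypothetical, chosen purely for illustration): if each FP32 lane can instead execute one fused multiply-add on a packed pair of FP16 values per cycle, peak throughput simply doubles.

\[
R_{\text{FP32}} = N_{\text{cores}} \times 2\,\tfrac{\text{FLOP}}{\text{FMA}} \times f,
\qquad
R_{\text{FP16}} = 2 \times R_{\text{FP32}}.
\]

For a hypothetical 2,560-core GPU at 1.5 GHz, that works out to roughly 7.7 TFLOPS of FP32 and 15.4 TFLOPS of FP16.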

When Pascal launches in the first half of 2016, it will go head to head with AMD’s upcoming Arctic Islands line, which is also expected to use HBM 2.0 and be built on a 16nm process. 2016 is shaping up to be an exciting year for graphics card launches, where we might finally see significant performance gains over the previous generation.

Source: Nvidia’s upcoming Pascal GPU pictured with HBM 2.0 – TechSpot

AMD and Nvidia get ready for next-gen DirectX 12

Microsoft has yet to launch its next-generation DirectX 12 multimedia API, but AMD and Nvidia are both ready with hardware to support it.

For AMD, its Radeon HD 7000 and Radeon R200 series will support the API, while over at Nvidia support will come from the Fermi, Kepler, and Maxwell generations of GeForce GPUs.

DirectX 12 isn’t just a small tweak applied to the top of the existing DirectX 11 API; it’s a big revision. It brings to the table a significant number of benefits, as listed by AMD:

Better use of multi-core CPUs

More on-screen detail

Higher min/max/average framerates

Smoother gameplay

More efficient use of GPU hardware

Reduced system power draw

Allows for new game designs previously considered impossible due to the restrictions of older APIs

Better use of multi-core CPUs is significant. Currently under DirectX 11, no matter how many cores your CPU has, the first core does the majority of the hard work for the API, with the rest of the cores doing very little. Under DirectX 12, the workload will be far better distributed, even across as many as eight cores.
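A rough idea of what that looks like in code (this is a generic C++ sketch of the “record work on many threads, submit once” model that DirectX 12 command lists enable; the type and function names are made up for illustration and are not actual Direct3D 12 API calls):

```cpp
// Conceptual sketch only: a DX12-style model where each CPU core records its
// own command list, and the render thread submits them all in one cheap step.
// "CommandList", "record_draws", and "submit" are illustrative stand-ins.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                    // stand-in for a recorded batch of GPU commands
    std::vector<std::string> commands;
};

// Each worker thread records its share of the frame's draw calls.
static void record_draws(int thread_id, int first, int count, CommandList& out) {
    for (int i = 0; i < count; ++i)
        out.commands.push_back("draw object " + std::to_string(first + i) +
                               " (recorded on thread " + std::to_string(thread_id) + ")");
}

// The render thread submits all recorded lists to the GPU queue in one go.
static void submit(const std::vector<CommandList>& lists) {
    for (const auto& list : lists)
        std::printf("submitting %zu commands\n", list.commands.size());
}

int main() {
    const int num_threads = 8;          // e.g. one recording thread per CPU core
    const int draws_per_thread = 1000;

    std::vector<CommandList> lists(num_threads);
    std::vector<std::thread> workers;

    // Recording is spread across all cores instead of funnelling every draw
    // call through a single "main" rendering thread.
    for (int t = 0; t < num_threads; ++t)
        workers.emplace_back(record_draws, t, t * draws_per_thread,
                             draws_per_thread, std::ref(lists[t]));
    for (auto& w : workers) w.join();

    submit(lists);                      // one submission at the end of the frame
    return 0;
}
```

Under DirectX 11, the equivalent would be a single thread doing essentially all of the recording, which is why one core ends up saturated while the others sit idle.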

Given that AMD has a number of competitively priced 8-core processors in its line-up, this might give the company an edge over Intel in the gaming arena.

On the Nvidia side of things, there are new multi-sampling shaders, significantly faster geometry shader rendering, and ray-traced shadows.

DirectX 12 will also make better use of the GPU resources available to it, which will benefit systems with beefier graphics cards. This, in turn, will give those expensive AMD and Nvidia GPUs a little more to do.

This isn’t just about gaming, though. It is also going to benefit other emerging areas such as virtual reality, because it will allow existing hardware to do more and to work more efficiently.

via AMD and Nvidia get ready for next-gen DirectX 12 | ZDNet.

NVIDIA Bows to Outraged Overclockers, Will Restore Feature in Upcoming Driver

Rejoice; overclocking is returning to Maxwell

NVIDIA Corp. (NVDA) faced heavy criticism for disabling overclocking on laptop PCs via a driver update. The move threatened to alienate some loyal gaming customers who had purchased laptops from various OEMs that billed the overclockability of NVIDIA’s GeForce GTX 970M and 980M GPUs (second-generation Maxwell mobile parts) as a selling point. Critics complained not only about NVIDIA’s decision, but also about the fact that it had seemingly condoned its partner OEMs’ highly visible practice, only to change its mind and kill the “feature,” proclaiming it a bug.

Well, lo and behold, NVIDIA has changed its tune. In a post to the GeForce Forums, it looked to assuage the increasingly rebellious crowd, promising to reverse the controversial edict. Customer care rep “PeterS” wrote:

As you know, we are constantly tuning and optimizing the performance of your GeForce PC.

We obsess over every possible optimization so that you can enjoy a perfectly stable machine that balances game, thermal, power, and acoustic performance.

Still, many of you enjoy pushing the system even further with overclocking.

Our recent driver update disabled overclocking on some GTX notebooks. We heard from many of you that you would like this feature enabled again. So, we will again be enabling overclocking in our upcoming driver release next month for those affected notebooks.

If you are eager to regain this capability right away, you can also revert back to 344.75.

As one user pointed out, PeterS is not just a volunteer forum moderator for NVIDIA’s GeForce Forums; he’s a full-fledged NVIDIA employee. It is unclear whether “ManuelG,” the user who originally posted NVIDIA’s proclamation that laptop overclocking was “a bug,” was an actual NVIDIA employee. It’s possible that the message got distorted along the way if that user was only a volunteer moderator.

Read more: DailyTech – NVIDIA Bows to Outraged Overclockers, Will Restore Feature in Upcoming Driver.

Samsung wants the ITC to block Nvidia chips in the US

In the latest round of a patent battle between Nvidia and Samsung, Samsung petitioned the US International Trade Commission to block the sale of Nvidia’s graphics processors in the US, Bloomberg reported on Friday.

The request, according to the ITC, would extend to both Nvidia’s graphics cards and systems on a chip, which means it could conceivably impact both Nvidia’s GeForce graphics card line and its Tegra mobile processors.

The back-and-forth between the two companies began in September when Nvidia sued both Samsung and Qualcomm, alleging that the two companies infringed on some of Nvidia’s GPU-related patents. At the time, Nvidia said it requested that the ITC block Samsung Galaxy phones that contained certain chips from Qualcomm, ARM, and Imagination Technologies.

For its part, Samsung struck back against Nvidia with a patent lawsuit of its own earlier this month. In its complaint, Samsung claimed that Nvidia infringed on six of its patents related to chip design and other technologies.

Why this matters: It may go without saying, but a ban on critical components such as graphics cards and processors could have a ripple effect across the tech industry, as it could affect other companies that use Nvidia’s chips in their products.

Engadget notes that “ITC complaints typically take less time to handle than lawsuits,” and that as a result, “there’s a greater chance that Nvidia and partners will have to yank their products.”

An Nvidia spokesperson told Bloomberg that it plans to lodge its own complaint with the ITC against Samsung, though, so this game of chicken is far from over.

via Samsung wants the ITC to block Nvidia chips in the US | PCWorld.

Like the old days: Why AMD and Nvidia are fighting?

AMD and Nvidia are at it again. The two reigning champs in the market for video game graphics have been fighting since late last month when some performance issues on the PC version of Watch Dogs kicked up a fresh controversy. And given that AMD is still talking about the issue publicly, it doesn’t look like things are going to settle down anytime soon.

Are you one of the people perplexed by all the sound and fury emanating from PC gaming forums? Don’t worry: I am, too. To help us all get up to speed, I prepared a handy guide to the main talking points here.

Have they ever been at peace with one another?

Not really, no. They’re sort of like the Coke vs. Pepsi of video games. That comparison is all the more relevant considering that some of their other competitors, like Intel, have captured a much larger portion of the overall graphics market by appealing to PC users who don’t need to play serious games and thus don’t care as much about spending upwards of $300 for the best graphics card imaginable. Something similar happened when Pepsi and Coke locked horns so intensely that they didn’t notice other, smaller competitors had started making little things called energy drinks.

Is there a substantial difference between their cards?

It depends on who you ask. Last year when we polled our readers, the Kotaku community seemed to overwhelmingly favor Nvidia cards. That doesn’t say anything about performance, mind you—just people’s preferences. But market share could be a significant issue here, since Nvidia has been beating out its closest competitor specifically in the PC realm in recent years. Here’s a quick description of Nvidia’s current, enviable position from the financial site The Motley Fool:

NVIDIA has benefited from the growing PC gaming market, with revenue from its GeForce gaming GPUs rising by 15% in fiscal 2014. This growth came during a continuing decline in the PC market as a whole, with NVIDIA specializing in one of the few areas that have remained immune to the PC sales slump. NVIDIA’s share of the discrete GPU market has also been on the rise, with the company now commanding around 65% of the market. NVIDIA was nearly even with rival AMD back in 2010 in terms of market share, but the gap has been widening each year.

What does that have to do with anything?

Well, each company’s influence in the PC gaming market rises and falls depending on the worth that individual game developers give to it. So if a company like, say, Ubisoft is weighing a special partnership with Nvidia because lots of PC gamers use its cards over AMD tech, its executives would probably feel more inclined to sign on if they were convinced that keeping Nvidia happy would guarantee them the rapt attention of 65 percent of PC gamers.

Full Story: Like the old days: Why AMD and Nvidia are fighting? – TechSpot.