

Microsoft’s quantum cloud computing plans take another big step forward

The Redmond giant has expanded Azure Quantum to the wider ecosystem.



Azure Quantum, the public cloud ecosystem dedicated to quantum applications developed by Microsoft, is now available for public preview. The Redmond giant has urged developers and researchers in the field to start using the platform’s cloud services to explore, build and test applications of quantum technologies that could transform a wide range of industries.

Since Microsoft’s Build event last year, Azure Quantum has been in limited preview, and developers from select companies have been piloting the platform for the past few months. Experiments have been carried out in many different fields, including materials design, financial modelling and traffic optimization.

“With Azure Quantum Public Preview, we’re opening up the technology to the broader ecosystem,” Julie Love, senior director at Microsoft Quantum, told ZDNet. “This means that developers, researchers, systems integrators, and customers can use it to learn and build.”

Azure Quantum aims to create a one-stop shop for developers, complete with the software and hardware resources that are necessary to build quantum applications.

Quantum computing is based on different building blocks than classical computing. While a classical bit can only hold a single value of either zero or one, quantum bits – or qubits – can be placed in a superposition of both values at the same time. Leveraging this characteristic of qubits, quantum computers can, in principle, solve certain problems exponentially faster than classical computers, although quantum devices are still in their infancy.
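The superposition idea can be sketched with a toy single-qubit simulator in ordinary Python. This is a pedagogical sketch only, not part of Azure Quantum or the QDK: a qubit's state is tracked as a pair of amplitudes, and the Hadamard gate turns a definite zero into an equal superposition.

```python
import math

# A single qubit's state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. The classical bit 0 corresponds to (1, 0).
state = [1.0, 0.0]

def hadamard(state):
    """Apply the Hadamard gate, putting a basis state into superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

state = hadamard(state)

# Measurement probabilities: the qubit is now equally likely
# to be read out as 0 or as 1.
probs = [abs(amp) ** 2 for amp in state]
print(probs)  # -> [0.5, 0.5] up to floating-point rounding
```

Real quantum hardware manipulates these amplitudes physically; simulating them classically, as here, requires memory that grows exponentially with the number of qubits, which is why large simulations quickly become intractable.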

Azure Quantum’s ecosystem also comes with software packages to help developers get started with writing quantum applications. Among them, an open-source quantum development kit (QDK) provides a basis for researchers to develop new algorithms with Q#, a quantum-focused programming language.

Researchers can use the QDK to develop and test new quantum algorithms, to run small examples on a simulator, or estimate resource requirements to run simulations at scale on future quantum computers. QDK’s GitHub repository also includes open-source Q# libraries and samples that can be used to build quantum computing applications.

“Quantum computing research is enabled in Azure Quantum by a rich set of tools ranging from the QDK and the Q# programming language for quantum,” said Love. “The Q# programming language is a high-level modern language that promises long-lasting, durable code, meaning that your code will work across different types of quantum hardware and on future quantum systems.”

Microsoft has started working on quantum applications in chemistry, and recently published some research on using quantum computers to design a catalyst that could take carbon out of the atmosphere. Early trials of Azure Quantum also saw Microsoft collaborating with materials science company Dow to build a quantum representation of a chemistry problem using the Q# language.

The quantum devices that are currently available can only support a small number of qubits, meaning that the quantum algorithms built today on Microsoft's platform are designed to tackle small-scale problems with little business relevance. But as Love explains, the point of Azure Quantum is rather to let developers experiment with quantum capabilities, laying the groundwork in anticipation of the improved hardware to come.

“These applications in quantum computing hold the promise to solve some of our planet’s toughest challenges – in energy, climate, materials, agriculture, healthcare and more,” said Love. “Problems like these will require the use of large, scalable, fault-tolerant quantum hardware that is under development, and it’s critical to start building and testing these quantum methods today.”

Azure Quantum, however, offers an alternative to developers who aren't keen to wait for a full-scale quantum computer to become available. Microsoft is also engaged in the field of quantum-inspired technology – a method that consists of emulating some quantum effects on classical computers to start reaping the benefits of quantum computing in the nearer term.

The idea is to mimic certain quantum behaviors in order to develop quantum-inspired algorithms that can then be run on classical hardware to solve difficult problems, to achieve significant speedup over traditional approaches. The method is particularly suited to optimization problems.
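One classical heuristic in this family is simulated annealing, which borrows the physical idea of occasionally accepting a worse solution in order to escape local minima. The sketch below is an illustration of the flavor of such methods, not Microsoft's or 1QBit's actual solvers; the toy objective function is invented for the example.

```python
import math
import random

def energy(x):
    """Toy QUBO-style objective over 4 binary variables: pick exactly
    two items, with a bonus for picking items 0 and 2 together.
    The global minimum is -1, at x = [1, 0, 1, 0]."""
    return (sum(x) - 2) ** 2 - x[0] * x[2]

def anneal(energy, n_bits, steps=20000, t_start=2.0, t_end=0.01, seed=0):
    """Minimize `energy` over binary strings via simulated annealing.

    Uphill moves are accepted with probability exp(-delta/T), which lets
    the search escape local minima -- the classical analogue of the
    quantum effects that 'quantum-inspired' solvers emulate.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    cur_e = energy(x)
    best, best_e = x[:], cur_e
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n_bits)
        x[i] ^= 1  # propose flipping one bit
        new_e = energy(x)
        if new_e <= cur_e or rng.random() < math.exp((cur_e - new_e) / t):
            cur_e = new_e  # accept the move
            if cur_e < best_e:
                best, best_e = x[:], cur_e
        else:
            x[i] ^= 1  # reject: undo the flip
    return best, best_e

best, best_e = anneal(energy, n_bits=4)
print(best, best_e)
```

Production solvers handle problems with thousands of variables and use far more sophisticated moves and schedules, but the accept-or-reject structure above is the common core of this class of optimizers.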

Azure Quantum customers, therefore, can use quantum-inspired optimization solvers from Microsoft and partner company 1QBit to run large problems in Azure on classical CPUs, GPUs and FPGAs.

The quantum-inspired methods provided by Azure Quantum were used by advanced materials company OTI Lumionics to design next-generation OLED displays, for example. Ford has also been trialing the technology to improve traffic optimization, with promising results in scenarios involving as many as 5,000 vehicles.

The preview of Azure Quantum also saw software company Jij and Toyota Tsusho working with quantum-inspired tools to solve mobility challenges, optimizing the timing of traffic lights to relieve city congestion. The researchers were able to reduce car waiting time by 20% when compared to conventional optimization methods.

“We’ve seen exciting work already from customers and partners in traffic optimization, financial modelling, transportation and logistics, materials design, and more,” said Love. “I’m most excited to see what new ideas developers come up with once they’ve had the tools and solutions in their hands, particularly for solutions to our biggest challenges in climate and the environment.”

In parallel to running the Azure Quantum platform, Microsoft is currently in the process of developing its own quantum computer, but the technology isn’t advanced enough to compete against other cloud-based quantum processors. The tech giant is pursuing a different method from its competitors, based on a so-called “topological qubit”, which Microsoft argues will be protected from noise and will do a better job of retaining information.




Intel architect Koduri says every chip will be a neural net processor

Intel argues the acceleration of matrix multiplications is now an essential measure of the performance and efficiency of chips, with a raft of capabilities for forthcoming processors Alder Lake, Sapphire Rapids and Ponte Vecchio.




Intel’s head of architecture, Raja Koduri.

The processing of neural networks for artificial intelligence is becoming a main part of the workload of every kind of chip, according to chip giant Intel, which on Thursday unveiled details of forthcoming processors during its annual “Architecture Day” ritual.

“Neural nets are the new apps,” said Raja M. Koduri, senior vice president and general manager of Intel’s Accelerated Computing Systems and Graphics Group, in an interview with ZDNet via Microsoft Teams.

“What we see is that every socket, it’s not CPU, GPU, IPU, everything will have matrix acceleration,” said Koduri.

Koduri took over Intel’s newly formed Accelerated Computing Unit in June as part of a broad reorganization of Intel’s executive leadership under CEO Pat Gelsinger.

Koduri claimed that by speeding up the matrix multiplications at the heart of neural networks, Intel will have the fastest chips for machine learning, deep learning and any other form of artificial intelligence processing.

Also: Intel forms Accelerated Computing, Software business units

“We are the fastest AI CPU, and our Sapphire Rapids, our new data center architecture, is the fastest for AI workloads, our new GPUs, nobody so far, there have been dozens of startups, but nobody beat Nvidia on a training benchmark, and we have demonstrated that today.”

Intel showed a demonstration in which its forthcoming stand-alone GPU, Ponte Vecchio, bested Nvidia’s A100 GPU in a common benchmark neural network task, running the ResNet-50 neural network to categorize images from the ImageNet library of photographs.


Intel claims pre-production versions of its Ponte Vecchio GPU can best Nvidia at a standard measure of neural network performance in deep learning applications, where the ResNet-50 neural network has to be trained to process thousands of images per second from the ImageNet picture collection.


Intel claims Ponte Vecchio can also create predictions faster with ResNet-50 on ImageNet compared to Nvidia and others in what are known as inference tasks.

In the demonstration, Intel claims Ponte Vecchio, in pre-production silicon, is able to process over 3,400 of the images in one second, topping previous records of 3,000 images. That is for neural network training. In the area of inference, when a trained neural net makes predictions, Ponte Vecchio is able to make predictions for over 43,000 images in a single second, topping what it cites as the competing top score of 40,000 images per second.
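Taking Intel's claimed figures at face value (these are vendor numbers, not independent benchmark results), the margins over the cited previous records work out as follows:

```python
# Intel's claimed ResNet-50 throughput vs. the previous records it cites.
train_intel, train_prev = 3400, 3000    # images/second, training
infer_intel, infer_prev = 43000, 40000  # images/second, inference

train_gain = (train_intel / train_prev - 1) * 100
infer_gain = (infer_intel / infer_prev - 1) * 100
print(f"training: +{train_gain:.1f}%, inference: +{infer_gain:.1f}%")
# training: +13.3%, inference: +7.5%
```

In other words, the claimed leads are meaningful but not dramatic, which is why the MLPerf submissions Koduri mentions below will be the real test.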

Intel’s Xeon chips have tended to dominate the market for AI inference, but Nvidia has been making inroads. Intel has little share in neural network training, while Nvidia dominates the field with its GPUs.

Koduri said the company intends to compete against Nvidia in the annual bake-off of AI chips, MLPerf, where the company claims bragging rights on ResNet-50 and other such benchmark tasks.

Architecture Day focuses on Intel’s roadmap for chip design: how its circuits will lay out the transistors and the functional blocks on the chip, such as arithmetic logic units, caches and pipelines.

An architecture change, for Intel or for any company, brings new “cores,” the heart of the processor, which control how the “datapath” (the storage and retrieval of numbers) and the control path (the movement of instructions around the chip) are managed.

Many aspects of the new CPUs have been disclosed previously by Intel, including at last year’s Architecture Day. The company has to get software designers thinking about, and working on, its processors years before they are ready to roll off the line.

For instance, the world has known Intel was going to bring to market a new CPU for client computing, called Alder Lake, which combines two kinds of CPUs. On Thursday, Intel announced it would rename those two, formerly code-named Golden Cove and Gracemont, as “Performance Core” and “Efficient Core.” More details on that from ZDNet’s Chris Duckett.

Also: Intel unveils Alder Lake hybrid architecture with efficient and performance cores

Among the new disclosures today is that the new CPUs will make use of a hardware structure known as the “Thread Director.” The Thread Director takes control of how threads of execution are scheduled to run on the processor, adjusting to factors such as energy use and relieving the operating system of some of that role.

“The entire way the OS interacts with hardware is a hardware innovation,” the company said. Thread Director, Intel says, “provides low-level telemetry on the state of the core and the instruction mix of the thread, empowering the operating system to place the right thread on the right core at the right time.”


Thread Director, a hardware scheduler that will take over some responsibility for managing threads of instruction from the operating system, was one of the new items discussed at Intel’s Architecture Day.


Another new disclosure is how the chips will make use of memory bandwidth technologies. Alder Lake, for example, will support PCIe Gen 5 and DDR5 memory interfaces.

Intel disclosed that its forthcoming data center processor, Sapphire Rapids, the next era of its Xeon family, will have certain performance aspects. For example, the chip will perform 2,048 operations per clock cycle on 8-bit integer data types using what Intel calls its AMX, or “advanced matrix extensions.” Again, the emphasis is on neural net kinds of operations. AMX is a special kind of matrix multiplication capability that will operate across separate tiles of a chip. Sapphire Rapids is composed of four separate physical tiles that each have CPU and accelerator and input/output functions, but that look to the operating system like one logical CPU.
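A quick back-of-envelope calculation shows what the disclosed per-cycle figure implies for raw throughput. Note that the clock speed used below is a hypothetical placeholder, since Intel did not disclose Sapphire Rapids clock speeds here:

```python
# Back-of-envelope throughput from the disclosed figure of 2,048 int8
# operations per clock cycle via AMX.
ops_per_cycle = 2048
clock_hz = 2.0e9  # ASSUMED clock speed, not an Intel-disclosed figure

tops = ops_per_cycle * clock_hz / 1e12  # trillions of operations per second
print(f"{tops:.1f} int8 TOPS per core at the assumed clock")
# 4.1 int8 TOPS per core at the assumed clock
```

Per-socket throughput would scale with the core count, which is why matrix extensions of this kind can shift a general-purpose CPU meaningfully toward AI workloads.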


Intel claims Sapphire Rapids is optimized for AI via extensions such as AMX.


Sapphire Rapids is an example of how Intel is increasingly looking to the physical construction of chips across multiple substrates as an advantage. The use of multiple physical tiles, for example, rather than one monolithic semiconductor die, makes use of what Intel dubs its embedded multi-die interconnect bridge.

Thursday’s presentation also featured plenty of discussion of Intel’s process technology, which the company has been seeking to straighten out after missteps in recent years.

Because of the limits of Moore’s Law’s traditional scaling of transistor size, said Koduri, it is essential to utilize other advantages that Intel can bring in chip making, including stacking of multiple die within a package.

“Today it is far more important for architects to leverage every tool in our process and packaging tool chest than it was a decade ago to build this stuff,” said Koduri. “Before, it was, yeah, yeah, yeah, the traditional Dennard Scaling, Moore’s Law took care of it, take my new CPU, put it on the new process node, you get it done.”


He was referring to the observation by Robert Dennard, a scientist at IBM, in the 1970s that as more and more transistors are packed into a square area of a chip, the power consumption of each transistor goes down so that the processor becomes more power-efficient. Dennard Scaling is regarded as being effectively dead, just like Moore’s Law.

Both Alder Lake and Sapphire Rapids will be built by Intel using what it is calling its “Intel 7” process technology. That is a renaming of what had been called “10nm Enhanced SuperFin,” whereby the company adds a more efficient three-dimensional transistor, a FinFET, to the 10-nanometer process for greater energy efficiency. (The Intel 7 designation is part of a broad renaming of Intel’s process technology that the company unveiled in July.)

At the same time, some of Intel’s parts will be manufactured by Taiwan Semiconductor Manufacturing Company, which also supplies Intel’s competitors. That selective outsourcing is an extension of Intel’s existing use of outsourced transistor production, part of what CEO Gelsinger has called Intel’s “IDM 2.0” strategy.

Also: Intel: Data bandwidth, sparsity are the two biggest challenges for AI chips

Today, said Koduri, “it is a golden age for architects because we have to use these tools much more effectively.” Koduri was echoing a claim made in 2019 by U.C. Berkeley professor David Patterson that computer architects have to compensate for the device physics that mean Moore’s Law and Dennard Scaling no longer dominate.

Of course, with Nvidia continuing to innovate in GPUs, and now planning to unveil its own CPU, “Grace,” in coming years, and with startups such as Cerebras Systems building entirely new kinds of chips, the target for Intel in AI is not simply to make its processors more AI friendly. It must be to change the way the field of AI goes about its work.

Asked how Intel’s various innovations may change the way neural networks are built, Koduri said that the numerous processor types now proliferating at Intel and elsewhere will have to cooperate much more closely on tasks rather than functioning apart.

“The workloads are definitely going in the direction where these things called CPUs, GPUs, DPUs, and memories talk to one another way more than they are talking to each other right now.”

“They will be talking to each other, they’ll be in closer collaboration between these things, to get the work done, than you have seen in the first five years of deep learning.”

Koduri was referring to the period of 2016 to 2021 as “the first five years of deep learning,” as he sees it. “The next five years will bring all these things more closer together.”





Hands-on with Samsung Galaxy Z Fold 3 and Z Flip 3: Sturdier and sleeker

The Galaxy Z Fold 3 and Galaxy Z Flip 3 are interesting improvements from past iterations with new features added to hardware and user interface.



Image: ZDNet/Cho Mu-Hyun

Samsung unveiled its latest foldable smartphones, the Galaxy Z Fold 3 and Galaxy Z Flip 3, with great flair at this week’s Unpacked event, promising they will take the category mainstream. The full specs of these devices can be found here, but here are my first impressions.

Refined hardware and design

The Galaxy Z Fold 3 shares essentially the same aesthetic design as its predecessor, the Z Fold 2, albeit with some minor changes like the rear cameras, which now form a straight line instead of being in a square block.

When I first held the Z Fold 3, the first thing I noticed was that it seemed sturdier than the Z Fold 2, both when folded shut and when unfolded.

When folded shut, the Z Fold 3 felt compact and tightly held together. It’s a good feeling. Samsung said the Z Fold 3 has a thinner hinge so this may be what has helped the sides shut closer together. But I am guessing minor upgrades all-around in the cover glass material and display panel’s curvature also contributed to the improved sturdiness.

When unfolded, viewing the device from top or bottom, the two sides also looked better aligned, forming a visually pleasing straight line. For the Z Fold 2, one side would always be slightly bumped by the hinge, but this is now gone.

According to Samsung, the Z Fold 3 is 11 grams lighter than its predecessor, but the difference doesn’t really register, and the device still feels heavier than conventional smartphones.

With the Galaxy Z Flip 3, you can tell right away that the company really wanted to ramp up the “cool” factor, especially by providing the new cream colourway.

The highlight of the device is really the bigger cover screen on the outside. Samsung intentionally bumped up the screen size from last year’s iteration and boy does it look stylish.

Comparing the aesthetics of Samsung’s new foldables, the glass covers on the Z Flip 3 are more noticeable than those on the Z Fold 3, as they cover the body of the phone rather than just the screens.

Both devices have IPX8 water resistance, which means they can survive being submerged in 1.5 meters of water for 30 minutes. This isn’t an invitation to dump them deep in the water, but I sprayed water on both devices using a shower head for five minutes straight, and they both worked fine afterwards. I think it’s safe to say both foldables will fare fine to carry in the rain or around a swimming pool without worry.

Image: ZDNet/Cho Mu-Hyun

Smoother screen and user interface

The screens for both the Z Fold 3 and Z Flip 3 feel slightly toned down compared to their predecessors in terms of colour and brightness. The overall hues of these screens do still look more natural, however.

The two new devices both continue to offer the best OLED screen technology that Samsung can offer with both displays having 120Hz refresh rates to boot.

The overall user interface for both devices also feels improved. Apps have so far loaded quickly, and screen transitions when moving the phone from folded to unfolded to flex mode are smoother. I think this is partly thanks to the hardware boost from Qualcomm’s Snapdragon 888 processor, as well as Samsung’s improved optimisation for these form factors.

There are also some changes to on-screen icons, buttons, and app bars that generally look more polished compared to previous iterations. More third-party apps, such as BBC or Naver, also seem better integrated for the main screens, especially for the Z Fold 3. Pre-loaded apps had either no empty black spaces or fewer than before.

Turning to the crease, it’s still there for both devices. For the Z Flip 3, the crease looks identical to its predecessor’s, which means it is very noticeable and you can feel it most of the time. The Z Fold 3’s crease, meanwhile, is less noticeable as the horizontally wider screen just visually absorbs it better.

Image: ZDNet/Cho Mu-Hyun

S Pen first impressions

Samsung is supporting S Pen for the first time on the Z Fold 3. I found the screen’s response time to the stylus to be equal or even better than what I experienced with the Note series. There were no noticeable latency problems while I scribbled on the Samsung Note app.

The big question I had before trying this feature was how the S Pen would hold up against the crease in the middle. Upon use, I was pleasantly surprised that I could write over it without any issues. The crease is noticeable, but it never interrupted the writing process. Encasing the Z Fold 3 in a dedicated case that can house the S Pen also made the device feel premium and reliable.

The second big question I had was how much the S Pen would be used during day-to-day tasks with the Z Fold 3. This is something that will have to be tested out in the coming weeks.

For the Note series, the S Pen was not only an integral part of the experience, but the motions you had to go through to use it were also simple: You take it out and start jotting things down. I also usually used the S Pen while holding the Note device in the air.

By comparison, the Z Fold 3 adds one more step of having to unfold the device, which I have usually done with both hands due to the strength of the magnets holding the sides together. The Z Fold 3 is also much heavier than the Note series, so I have found using the S Pen while holding the device up with one hand a little more difficult.

Z Fold 3’s under-display camera

Samsung touted the under-display camera on the Z Fold 3 heavily during its Unpacked event. The front camera is placed under the main screen and is hidden when not in use.

My first impression is that the feature seems less subtle than, perhaps, it should be. When the camera is turned off, I thought the screen covering would be more uniform with the content playing around it, but this is not the case. When you are not using the camera and holding the screen far away, the hole is not noticeable. But when holding the screen slightly closer, I could see a flickering dotted circle where the hole is.

Samsung representatives said this happens when the hole area has different light transmittance from the rest of the screen, and that the dotted circle is the camera pattern showing through. It is something you notice every now and then, but it remains to be seen how it will affect daily use, if at all.

Additionally, Samsung has opted for a 4MP sensor for the under-display camera instead of a 10MP one. The company claimed AI and other software make this pixel downgrade unnoticeable. So far, I have only taken a few pictures with the 4MP camera and will need to test it fully to determine whether the lower pixel count is a worthwhile tradeoff.

Image: ZDNet/Cho Mu-Hyun




Intel unleashes Beast Canyon NUC 11 Extreme gaming desktop kit

The latest in Intel’s line of Canyon-nicknamed Next Unit of Computing Extreme PC kits, the Beast version comes equipped with either the latest Core i9 CPU or unlocked Core i7 processor and support for a full-size graphics card in a small-form-factor chassis with the familiar skull logo.




Intel “Beast Canyon” NUC 11 Extreme Kit

Intel’s Next Unit of Computing (NUC) platform has been around long enough that it’s probably lost its “next” status, but that doesn’t stop the company from continuing to churn out new versions as its Core processors get refreshed. With each new iteration of the NUC, we can expect an NUC Extreme version, designed for gamers and other high-performance PC users, housed in a compact case with a skull logo etched on it, and code-named with a scary-sounding Canyon-based moniker.

Following in the lineage of such predecessors as Skull Canyon, Hades Canyon and Ghost Canyon, the new Beast Canyon has just been released under the official name of NUC 11 Extreme Kit. As with its predecessors, the kit is based around the latest top-performing Intel Core processors housed in a small-form-factor chassis with room for a full-size graphics card, up to 64GB of RAM, and a number of storage options via a quartet of M.2 slots. The chassis is branded with the signature (removable) skull logo complete with RGB lighting to customize it to your taste. It features Intel’s Compute Element, which packages the CPU, motherboard and memory and storage slots into a single card-like unit that can be swapped out for a different version.

The Beast Canyon NUC is available in two configurations based on your eight-core Tiger Lake processor choice — either an unlocked Core i7-11700B or a Core i9-11900KB. (Note that both are 65W chips, so they won’t be as blazing as the beefier 125W full desktop versions.) While you supply the RAM, graphics and storage, the kit does come with a 650-watt power supply and a trio of 92mm fans to handle the components you add (up from the 80mm fans in the Ghost Canyon NUC). It also supplies a range of connectivity options, including a pair of Thunderbolt 4 ports, eight USB 3.1 ports, HDMI 2.0b port, and Wi-Fi 6E AX210 and Bluetooth 5.2 wireless antennas.

Beast Canyon also comes with a beast of a price tag, considering you’ll have to spend a hefty amount on the high-performance memory, GPU and storage this system demands. Intel’s starting price for the Core i7 version is $1,150 while the Core i9 edition will start at $1,350. If you want to find out if it’s worth the cost, check out our sister site CNET’s hands-on preview of the NUC 11 Extreme Kit.

