Design: June 2010 Archives

After going through an acquisition spree of its own, Virage Logic has agreed to be bought by EDA company Synopsys for about $315m in cash. The move comes only a month after Cadence Design Systems announced its plan to buy memory-IP specialist Denali.

The rationale offered by Aart de Geus, CEO of Synopsys, on the conference call for the purchase was not all that different from Cadence’s. “Putting together these IP blocks and making sure they work together is essential,” he said.

Like Cadence, Synopsys is expecting an increase in the use of third-party IP by chipmakers. “The trend of IP outsourcing is a massive trend. People are moving towards using commercial IP where they can except where they can add differentiation,” de Geus said, adding that a growing number of companies are realising what they thought was differentiated home-grown IP could be just a millstone. “What companies produce is not necessarily differentiated. Some of the standards they need to work with are so complex, their own people can’t necessarily do a cost-efficient job there. The downturn has helped many executives realise that outsourcing these efforts makes a lot of sense.”

De Geus argued that very little of the IP produced by Virage overlaps with that created at Synopsys. The EDA company backed away from the acquisition of memory-IP specialist Mosys in 2004 and has not developed its own memory IP. Even earlier, a move into standard-cell IP through the acquisition of Silicon Architects in the mid-1990s ended when Synopsys wound up the operation. De Geus explained that the move was “premature” and that it had gone into the business before it had a place-and-route tool (later acquired through its 2002 purchase of Avant!) that would make use of those cells.

Virage’s roots lie in low-level IP such as memory cores and standard cells, but the company has, more recently, moved into larger cores through the acquisition of ARC International and NXP Semiconductors’ IP design group. Potentially, the addition of the Virage line-up brings Synopsys into conflict with ARM, with which Synopsys has developed verification-reuse and low-power design methodologies.

De Geus argued that the processors from ARC complement those of ARM, agreeing with one analyst’s use of the term ‘ancillary’ to describe them. “ARM is one of our most important partners and is a friend of the company, a company around which we have built a number of our offerings. ARC is interesting because it so supports the ARM core processor. It provides a controller that can be used for subtasks to offload the main processor. It will be interesting to see how we can build solutions with ARM.”

ARM has gradually backed away from areas that Synopsys has moved into in the IP space. The ARM PrimeXsys portfolio of peripherals failed to take off, but Synopsys’s main successes in IP lie in this area. Although ARM has tried to get into digital signal processing (DSP), it is another area where the UK company has discontinued its efforts, choosing to focus on graphics instead. ARC has, in contrast, focused heavily on audio and similar DSP-based products.

Where ARM and Synopsys will compete head-on is in standard-cell libraries, and it’s a business where ARM has spent heavily to keep up with process technology. Having Virage provides Synopsys with a way to align design-implementation tools with the cell libraries, something that is becoming more important as design rules get ever more restrictive. Will ARM’s engineers get the same access to Synopsys’s tools people post-acquisition?

Belgium-based research institute IMEC has teamed up with Intel and a group of local universities on a programme that is intended to pave the way for exascale computers – supercomputers that are close to a thousand times more powerful than those being commissioned today.

“In 1997, we saw the first terascale machines. A few years ago, petascale appeared. We will hit exascale in around 2018,” said Wilfried Verachtert, high-performance computing project manager at IMEC, explaining that these machines will be able to perform 10^18 floating-point calculations per second.

The most powerful supercomputer in operation today is the Cray XT5 Jaguar, with a rated performance of close to 2 petaflops.
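To put those scales side by side, here is a rough back-of-envelope comparison, a sketch that uses only the figures quoted above:

```python
# Back-of-envelope comparison of the performance scales mentioned above.
# All figures come from the article; the rest is rounding.

TERAFLOPS = 1e12   # terascale: first machines around 1997
PETAFLOPS = 1e15   # petascale: reached a few years ago
EXAFLOPS  = 1e18   # exascale: expected around 2018

jaguar = 2 * PETAFLOPS   # Cray XT5 Jaguar, rated at close to 2 petaflops

print(f"Exascale over Jaguar:    {EXAFLOPS / jaguar:.0f}x")     # ~500x
print(f"Exascale over petascale: {EXAFLOPS / PETAFLOPS:.0f}x")  # 1000x
```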

At a presentation held to celebrate the opening of a new cleanroom at IMEC and the foundation of the ExaScience lab, Martin Curley, senior principal engineer and director of Intel Labs Europe, said: “We are focused on creating the future of supercomputing. We have a job to do of creating a sustainable future. Exascale computing can really change our world.”

Curley said the two main problems will be power consumption and the difficulty of writing highly parallel software. The performance required is the equivalent of 50 million laptops, which would demand thousands of megawatts of power.

He explained that, by the time exascale computers are likely to appear, silicon-chip geometries will have dropped to 10nm. Although these devices can potentially run at tens of gigahertz, Curley said power consumption concerns would force supercomputer makers to run them much more slowly, potentially even slower than today’s processors. The move will demand billions of processing units in one supercomputer. “How are we going to achieve that? The only way is through billion-operation parallelism.”
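As a rough illustration of where “billions of processing units” comes from, dividing an exaflop by the throughput of a single slow, power-limited unit gives the required degree of parallelism. The per-unit clock speed and operations per cycle below are illustrative assumptions, not figures from Intel:

```python
# Rough arithmetic behind "billions of processing units". The per-unit clock
# speed and operations per cycle are illustrative assumptions, not Intel figures.

TARGET_FLOPS = 1e18          # one exaflop

clock_hz = 1e9               # assume units clocked around 1GHz, not tens of GHz
ops_per_cycle = 2            # assume a couple of floating-point ops per cycle

per_unit_flops = clock_hz * ops_per_cycle
units_needed = TARGET_FLOPS / per_unit_flops

print(f"Processing units needed: {units_needed:.0e}")
# ~5e+08 with these assumptions; slower or simpler units push it past a billion.
```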

Curley added: “Even with just 10 to 12 cores, we see the performance of commercial microprocessors begin to degrade. The biggest single challenge is parallelism.”
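Curley did not spell out the mechanism, but Amdahl’s law gives a feel for one part of it: any serial fraction of a program caps the achievable speedup, however many cores are added. The 5 per cent serial fraction below is purely illustrative:

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / n) for serial fraction s on n cores.
# The 5% serial fraction is purely illustrative.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (12, 1_000, 1_000_000_000):
    print(f"{cores:>13} cores -> {amdahl_speedup(0.05, cores):5.1f}x speedup")

# 12 cores -> ~7.7x, 1,000 cores -> ~19.6x, a billion cores -> ~20.0x:
# unless the serial fraction is driven towards zero, extra cores buy almost nothing.
```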

The ExaScience lab will, as its test application, work on software to predict the damage caused by the powerful magnetic fields that follow solar flares in the hope of providing more accurate information to satellite operators and the power-grid companies.

With current-generation supercomputers, the mesh used to analyse field strength has elements that are a million kilometres across, far larger than the Earth itself. An exascale machine would make it possible to scale the mesh size down to elements that are 10,000km across.

Verachtert said the project aims to cut the power consumption of such a machine from 7,000MW, based on today’s technology, to 50MW, “and that is still higher than we want”.
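In efficiency terms, those two figures imply the following. The megawatt numbers come from the article; the rest is straightforward arithmetic:

```python
# What the power figures above imply. The 7,000MW and 50MW numbers come from
# the article; the efficiency values are simple arithmetic from them.

EXAFLOPS = 1e18

power_today_w  = 7_000e6   # exascale machine built with today's technology
power_target_w = 50e6      # the lab's target, "still higher than we want"

print(f"Efficiency gain needed: {power_today_w / power_target_w:.0f}x")       # 140x
print(f"Implied efficiency: {EXAFLOPS / power_target_w / 1e9:.0f} GFLOPS/W")  # 20
```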

One problem with a supercomputer that contains millions of discrete processors, each one containing thousands of processing elements, is the expected failure rate. “My optimistic projection is that there will be a failure every minute. It’s possible that there will be a failure every second. We have to do something about that.”

The failure rate will have a knock-on effect on programming. Today, it is possible to break up applications so that portions can be re-run after a hardware failure, which may happen once a day. That approach becomes impractical as the size of the machine scales up. Verachtert said the methods programmers use will have to take account of processors failing, using checkpoints and other techniques such as transactional memory – which Intel has already researched heavily – to allow code to be re-run automatically without disrupting other parts of the application.
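As a minimal sketch of the checkpoint-and-re-run idea (not the lab’s or Intel’s actual scheme, and leaving transactional memory aside), the following records each completed unit of work so that a restart only repeats what was lost:

```python
# A minimal checkpoint-and-re-run sketch. File and function names are
# hypothetical; a real exascale runtime would checkpoint to parallel storage
# and coordinate across millions of nodes, but the principle is the same:
# record completed work so that a restart repeats as little as possible.
import os
import pickle

CHECKPOINT = "state.pkl"

def load_state():
    """Resume from the last checkpoint, or start from scratch."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"next_unit": 0, "results": []}

def save_state(state):
    """Write the checkpoint atomically so a crash mid-write cannot corrupt it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def run(work_units):
    state = load_state()
    for i in range(state["next_unit"], len(work_units)):
        result = work_units[i]()      # a hardware failure may kill the job here
        state["results"].append(result)
        state["next_unit"] = i + 1
        save_state(state)             # only completed units are ever recorded
    return state["results"]

# After a failure, restarting the job re-runs only the units that had not yet
# been checkpointed; results from the rest of the application are untouched.
```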

You’d think people would be bored with forming embedded Linux consortia by now. But the creation of Linaro demonstrates that there was at least one unfilled niche.

Set up by ARM and a bunch of chipmakers, Linaro is different to some of the others that have appeared over the past few years, which are mainly intended to provide ready-made environments for mobile phones and internet tablets.

All of these groups run the risk of being home to nothing more than tumbleweed. A bundle of source code gets dumped in shortly after creation but interest wanes as developers concentrate on the major platforms - right now that’s likely to be Android.

At first sight, Linaro does not look too interesting to developers working with Android and its analogues. But that probably won’t affect Linaro’s success because this group scratches an itch that the chipmakers themselves have, which is the massive cost of developing low-level software for their applications processors. Since the late 1990s, as I describe in this feature for Engineering & Technology, the chipmakers have been bundling more and more software with their devices. But they haven’t picked up any more cash for their efforts.

Instead of doing all the work individually, and spending loads of money, they can club together. Most of them are ARM-based and they have broadly similar system architectures, although things such as the graphics accelerators will differ substantially. Amortising the cost of the support software for environments such as Android and WebOS is not going to solve the chipmakers’ software problems overnight - there are other platforms they need to support - but more cooperation at this level can put a dent in the heavy cost of development.

ARM has already kicked off a similar endeavour for its Cortex-M microcontrollers, although it operates differently to the open-source consortia. The Cortex Microcontroller Software Interface Standard (CMSIS) is designed to support a library of functions that users can bolt into their embedded controllers and has the backing of Cortex-M licensees such as NXP Semiconductors and STMicroelectronics. You can view Linaro as an extension of that kind of effort but with a different target: the Cortex-A series and mobile terminals.

It will be interesting to see whether other, independent developers turn up and contribute, but that may not matter: these are companies with plenty of developers of their own, and with customers who want to base their systems around at least one of the higher-level Linux platforms.