OVP hitches ride on TLM2 wagon


If you don't like stuff about obscure standards in system-level modelling, look away now.

A round-robin email from Simon Davidmann of Imperas and the self-styled Open Virtual Platforms (OVP) group arrived earlier today saying they have added support for version 2.0 of the Transaction Level Modelling standard (TLM), published by the Open SystemC Initiative (OSCI). No, I'm not proud of myself for getting three obscure acronyms into the one sentence.

OVP launched onto the scene just as the work on TLM2 was drawing to a close. The idea behind both is to build fast simulations of hardware designs so that you can debug them. Not only that, you can debug software programs that run on top of that hardware. It's taken since 1994 to get to the point where the electronics industry, or at least the chip-design part, accepts this is a viable way of designing hardware, but it is happening.

Imperas claimed OVP could be much faster than TLM2 and, although it was not meant to be a complete replacement for TLM2, the emphasis within their camp was that you would, for the most part, not need OSCI's stuff. The people behind OVP have, apparently, realised that this was not a viable position and have now decided to add "native TLM2 support to OVP".

If you want to get an OVP processor model that has TLM2 interfaces, you can drop a line to the people at OVPWorld. The OVP claim is that you can get hundreds of MIPS out of their models. As most processor models are written in C and then given a TLM2 interface, I've never been clear why OVP's approach should work out any faster, but the system architects out there can try it out.

There is a forum at the OVP website, so I imagine you will be able to ask questions about the OVP-TLM2 interface stuff there.

2 Comments

Chris, thanks for making your readers aware that OVP models are now usable in TLM2.0 simulation environments. I do have a few comments, though.

First, yes we did claim that “OVP could be much faster than TLM2”, and in fact we have models that run at 1,000 MIPS plus - whereas the fastest anyone has claimed for a TLM2.0 environment is around 200 MIPS - so yes OVP has proved to be much faster.

Second, you state that with OVP you do not need OSCI's stuff - this is true - there are over 250 people who have used OVP without OSCI... and incorrectly state that we have now "realized that this is not a viable position and have now decided to add native TLM2 support to OVP". Actually, there is no good, easy way to create high performance processor models in SystemC or TLM2.0 - they are just not designed for that - whereas the Virtual Machine Interface (VMI) API of OVP allows a processor model to be created in 4-6 weeks that runs 100's of MIPS. From the beginning of OVP these processor models could be used in C, C++, and SystemC simulations, and now with the new technology can be used natively with TLM2.0.

And for your information - the models are written in C - but the secret sauce is the way we take that C code and turn it into a Just In Time Morph Code processor simulator - that can run target instructions in as few as 3 host instructions - thus enabling us to model whole virtual platforms that can boot Linux in a few seconds on an average laptop.

So Chris, thank you for the publicity of our newly available TLM2.0 models - they are now in Beta - so anyone who wants fast, free, easy to use simulations of processors, platforms, and systems please come and visit www.OVPworld.org where they are freely available.

Thanks

Simon Davidmann

"Actually, there is no good, easy way to create high performance processor models in SystemC or TLM2.0 - they are just not designed for that."

I can see your point there but I think painting this as OVP vs SystemC modelling obscures what really happens in the model business. The processor models available for running with SystemC prototypes are not necessarily written in that language. The ones I've seen tend to be C/C++-native with SystemC/TLM interfaces. And to eke out a bit more performance they run memory accesses internally instead of trying to have them simulated in SystemC, which would slow them down.

Having a simulator that you can program to emulate a processor is clearly A Good Thing. But everybody in this business claims to have the fastest models and deploys strawman arguments as to why theirs should run quicker. And there is too much focus on the processor when the bottlenecks lie in modelling complete prototypes rather than free-running processor-memory subsystems.