
Ivan Posva, Azul Systems

Ivan Posva has more than ten years of industry-leading experience and influence in designing, developing, and implementing Java(tm) virtual machines on multiple processor architectures and operating systems. In his role as senior VM architect at Azul Systems, Ivan serves as one of the primary software engineers and influencers of the company's award-winning technology, including speculative locking, pauseless garbage collection, stack-based allocation, and in-depth virtual machine monitoring and performance characterization.
Ivan was previously the Java virtual machine technical lead for Mac OS X at Apple, where he architected the Java Shared Archive technology, and later served as a senior VM engineer at Entise Systems before joining Azul Systems.
Ivan holds a Diplom Informatikingenieur from ETH Zürich, Switzerland.

Presentation: "Network Attached Processing: Tapping 384-Way SMP, 256GB Java Technology Compute Bricks"

Track:   Scalable Computing

Time: Wednesday 10:15 - 10:45

Location: Conference Hall

Abstract:

Container-based application environments represent the dominant model for today's business application development. Gartner estimates that 50% of all new business applications are being developed for such platforms, reaching 80% by 2008*. Container-based environments are at the core of a tremendous number of application deployments, with the rate of application and service deployments steadily growing. The business benefits of these new applications come with major IT requirements. IT organizations are deploying, maintaining, and managing an ever-growing number of applications. Delivering predictable, reliable, and scalable service levels for enterprise applications has become a never-ending challenge.

This session reviews network attached processing, a fundamentally new model for delivering massive amounts of compute power to Java technology-based applications running on existing server hardware, operating systems, and middleware platforms. This new model addresses continuous deployment and management challenges and eliminates many existing barriers to achieving predictable service levels in an unpredictable world. In this session, we cover:
1. The network attached processing deployment model: how massive compute power can now be transparently injected into otherwise unmodified environments
2. The characteristics of compute pools and the power behind network attached processing
3. Some key technical features and benefits of network attached compute pools:
   a) Scaling of individual virtual machine instances to hundreds of processor cores and tens of gigabytes of memory
   b) Use of optimistic concurrency techniques at the Java virtual machine level to increase the natural scaling of unmodified container-based applications
   c) Elimination of garbage collection pauses and the common practical barriers to using very large heaps in applications based on Java software and the Java 2 Platform, Enterprise Edition (J2EE(tm) platform)
4. Practical experience in deploying network attached processing
5. Future application developments enabled by this new class of unbounded compute power

*Source: Gartner, September 2004


Presentation: "Java Technology Performance Myths Exposed"

Track:   Java 5.0

Time: Wednesday 11:00 - 12:00

Location: Conference Hall 2

Abstract: Many Java technology performance myths are floating around. Many stem from the early days of the Java platform, when simple (or missing) JITs and naive GC algorithms conspired to make Java technology slow. The Java platform has come a long way since then, and many of these early bits of performance wisdom are now just plain wrong. In this session we google for advice on making Java technology faster, and put that advice to the test: we compare tweaked and untweaked code on a wide range of modern Java virtual machines to see what actually makes Java technology run faster. This talk is more about coding styles and the lifetime of "performance knowledge" than raw JVM speed. No one JVM wins all races or loses all races, and nearly all performance tweaks from five years ago are now strictly counterproductive. An updated version of a talk given at the 2003 JavaOne conference, this talk focuses on newer JVMs and newer performance tweaks.
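One example of the kind of dated advice the talk examines is the old recommendation to hoist `array.length` into a local variable before a loop. The sketch below (class and method names are invented for illustration, not taken from the talk) shows the "tweaked" and plain styles side by side; on modern JITs, which hoist the length and bounds checks themselves, both typically compile to equivalent machine code:

```java
public class LoopTweak {
    // "Tweaked" style from old performance advice:
    // cache array.length in a local before looping.
    static long sumTweaked(int[] a) {
        long s = 0;
        int n = a.length;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    // Plain style: modern JITs hoist the length read and
    // eliminate redundant bounds checks on their own.
    static long sumPlain(int[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++) s += a[i];
        return s;
    }

    public static void main(String[] args) {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        // Both styles compute the same result; any speed difference
        // must be measured, not assumed from five-year-old advice.
        System.out.println(sumTweaked(data) == sumPlain(data)); // true
    }
}
```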


Presentation: "Speculative Locking: Breaking the Scale Barrier"

Track:   Scalable Computing

Time: Wednesday 14:30 - 15:30

Location: Conference Hall

Abstract:

Java(tm) technology programs are often multithreaded, and multithreaded programs require locking. Locking serializes execution, and serial execution limits a program's ability to scale. All the multicore CPUs in the world will not help if all threads are blocked on a lock. In practice, many locks guard against very rare race conditions. If two threads arrive at a locked region and neither is going to write to a location the other thread accesses, then the locking is not needed (this time!) and the threads could have run in parallel. Of course, it is generally impossible to tell ahead of time whether the threads will experience data contention, so the lock is required.
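The serialization problem can be seen in a minimal sketch (class and method names here are illustrative, not from the talk): one coarse-grained monitor guards a whole map, so two threads that update entirely different keys still queue up on the same lock even though they never touch the same data.

```java
import java.util.HashMap;
import java.util.Map;

public class CoarseLock {
    private final Map<String, Integer> counts = new HashMap<>();

    // One lock guards the whole map: always correct, but two threads
    // updating *different* keys still serialize on this monitor.
    public synchronized void increment(String key) {
        counts.merge(key, 1, Integer::sum);
    }

    public synchronized int get(String key) {
        return counts.getOrDefault(key, 0);
    }

    public static void main(String[] args) throws InterruptedException {
        CoarseLock c = new CoarseLock();
        // The two threads never touch each other's key, yet every
        // increment takes the same lock.
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment("a"); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment("b"); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(c.get("a") + " " + c.get("b")); // 1000 1000
    }
}
```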

Now comes Speculative Locking: a technique employed in a new server Java Virtual Machine. While maintaining full Java technology lock semantics, Speculative Locking allows many threads to execute speculatively inside a locked region at the same time. Only threads that actually hit data contention need to roll back and retry. Existing unmodified programs and commonly used library classes, such as java.util.Hashtable, can use Speculative Locking to scale to large numbers of CPUs. Developers can use simple coarse-grained locks instead of spending time debugging fine-grained locking mechanisms.
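The roll-back-and-retry idea can be illustrated in plain Java with an optimistic compare-and-set loop. Note this is only a software analogy with invented names, not Azul's implementation: their Speculative Locking applies transparently to unmodified synchronized blocks, whereas this sketch must be written optimistically by hand.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Optimistic update: read, compute off to the side, then commit
    // only if no other thread wrote in between. On contention, the
    // speculative result is discarded and the thread retries.
    public int increment() {
        while (true) {
            int seen = value.get();            // speculative read
            int next = seen + 1;               // work done without a lock
            if (value.compareAndSet(seen, next)) {
                return next;                   // no contention: commit
            }
            // contention detected: roll back (drop `next`) and retry
        }
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        OptimisticCounter c = new OptimisticCounter();
        Runnable worker = () -> { for (int i = 0; i < 10000; i++) c.increment(); };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 20000
    }
}
```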

In this session we explain how newly available hardware and JVM software will allow existing servers to use Speculative Locking. We look at some common programming idioms that developers might think scale well but do not (for example, large slow-moving Hashtables commonly used as caches). We compare this locking idiom with existing locking schemes in modern JVMs and demonstrate dramatically improved scaling.
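The slow-moving-Hashtable idiom mentioned above can be sketched as follows (class and method names are invented for illustration). The cache is read-mostly, so threads almost never conflict on data, yet every read still acquires the Hashtable's single internal monitor, so readers serialize anyway; this is exactly the rarely-contended lock that speculation targets.

```java
import java.util.Hashtable;

public class SlowMovingCache {
    // Read-mostly cache: writes are rare, reads dominate. Hashtable
    // synchronizes every operation on one monitor, so even reads that
    // could safely run in parallel take turns on the lock.
    private final Hashtable<String, String> cache = new Hashtable<>();

    public String lookup(String key) {
        return cache.get(key);    // synchronized internally
    }

    public void update(String key, String value) {
        cache.put(key, value);    // the rare write
    }

    public static void main(String[] args) {
        SlowMovingCache c = new SlowMovingCache();
        c.update("user:42", "Ivan");
        System.out.println(c.lookup("user:42")); // Ivan
    }
}
```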

Prerequisite: A general understanding of multithreaded Java programming and scale-up issues.
