== [http://cobweb.ecn.purdue.edu/~gekco/nemo3D/ NEMO3D] ==
NEMO3D calculates eigenstates in (almost) arbitrarily shaped semiconductor structures in the typical group IV and III-V materials. An educational version has been running in [https://www.nanohub.org/ nanoHUB] for a year, with executions that take a few seconds; this version has been used by over 600 people. It is expected soon to run on large systems, with executions requiring hours of CPU time and with hundreds of users. The code is currently being ported to the dual-core Cray XT3 at [http://www.psc.edu PSC]; a toy sketch of the underlying eigenvalue computation appears after the link below.
* [[NEMO3D | Performance Results]]
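The following is a minimal sketch, not NEMO3D's actual method: NEMO3D builds an atomistic tight-binding Hamiltonian, while this toy uses a finite-difference effective-mass Hamiltonian on a small grid. What carries over is the numerical pattern of assembling a large sparse Hamiltonian and extracting only a few low-lying eigenstates with an iterative solver. The grid size, units, and solver settings below are arbitrary choices for illustration.
<syntaxhighlight lang="python">
# Toy eigenstate calculation: sparse Hamiltonian + iterative solver for a few
# low-lying states. NEMO3D's real Hamiltonian is atomistic tight-binding; this
# effective-mass finite-difference stand-in is an assumption for the demo.
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 16   # grid points per dimension (toy size)
h = 1.0  # grid spacing, arbitrary units

# 1D second-difference operator with hard-wall (Dirichlet) boundaries.
d2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
eye = sp.identity(n)

# 3D Laplacian as a Kronecker sum; kinetic energy in units where hbar^2/m = 1.
lap = (sp.kron(sp.kron(d2, eye), eye)
       + sp.kron(sp.kron(eye, d2), eye)
       + sp.kron(sp.kron(eye, eye), d2))
hamiltonian = -0.5 * lap.tocsc()

# Lanczos-type iteration for the four lowest states ('SA' = smallest algebraic).
energies, states = eigsh(hamiltonian, k=4, which='SA')
print(energies)
</syntaxhighlight>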
== [http://lca.ucsd.edu/portal/software/enzo ENZO] ==
ENZO is an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-body) designed to simulate cosmological structure formation. Understanding the performance of AMR applications on distributed-memory architectures is challenging due to the dynamic multilevel data structures and the variety of communication patterns involved; a toy sketch of a basic AMR refinement step appears after the link below.
* [[ENZO | Performance Results]]
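As a rough illustration of the AMR cycle referred to above, the sketch below flags overdense cells and covers their bounding box with a finer child grid. The density-threshold criterion and factor-of-two refinement are common AMR defaults rather than ENZO's exact logic, and the function names are invented for the example.
<syntaxhighlight lang="python">
# Toy AMR step: flag cells for refinement, then cover them with a finer grid.
# The criterion and interpolation here are simplifications, not ENZO's own.
import numpy as np

def flag_cells(density, threshold=4.0):
    """Mark cells whose density exceeds a multiple of the mean (toy criterion)."""
    return density > threshold * density.mean()

def refine(density, flags, factor=2):
    """Cover the bounding box of flagged cells with a factor-2 finer child grid
    (nearest-neighbour prolongation; real codes interpolate conservatively)."""
    idx = np.argwhere(flags)
    if idx.size == 0:
        return None, None
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    patch = density[lo[0]:hi[0], lo[1]:hi[1]]
    child = np.repeat(np.repeat(patch, factor, axis=0), factor, axis=1)
    return (lo, hi), child

rng = np.random.default_rng(0)
rho = rng.lognormal(mean=0.0, sigma=1.0, size=(32, 32))  # mock density field
box, child = refine(rho, flag_cells(rho))
print("refined box:", box, "child grid shape:", child.shape)
</syntaxhighlight>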
== [http://www.ks.uiuc.edu/Research/namd/ NAMD] ==
NAMD is developed collaboratively by the Theoretical and Computational Biophysics Group (TCBG) and the Parallel Programming Laboratory (PPL) at UIUC. It is built on PPL's Charm++ parallel programming system, which has extensive support for latency tolerance and dynamic load balancing; efficient, lightweight communication is critical for Charm++ and the applications built within this framework. A toy sketch of Charm++-style overdecomposition and load balancing appears after the link below.
* [[NAMDPerformance | Performance Results]]
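The sketch below illustrates, in plain Python, the overdecomposition and measurement-based load balancing idea behind Charm++: work is split into many more objects ("chares") than processors, and a balancer reassigns them using measured per-object cost. The greedy strategy shown is similar in spirit to Charm++'s greedy balancer, but the class and function names here are invented for the example.
<syntaxhighlight lang="python">
# Toy version of Charm++-style overdecomposition plus greedy load balancing.
# 'Chare' and 'greedy_rebalance' are made-up names for this sketch.
import random

class Chare:
    """One migratable work unit; 'cost' stands in for measured execution time."""
    def __init__(self, ident):
        self.ident = ident
        self.cost = random.uniform(0.1, 1.0)

def greedy_rebalance(chares, num_procs):
    """Assign the heaviest objects first, each to the least-loaded processor."""
    procs = [[] for _ in range(num_procs)]
    load = [0.0] * num_procs
    for c in sorted(chares, key=lambda c: c.cost, reverse=True):
        p = load.index(min(load))   # current least-loaded processor
        procs[p].append(c)
        load[p] += c.cost
    return procs, load

random.seed(1)
chares = [Chare(i) for i in range(64)]         # 64 objects on 4 procs: 16x overdecomposition
assignment, load = greedy_rebalance(chares, 4)
print(["%.2f" % l for l in load])              # per-processor loads come out nearly equal
</syntaxhighlight>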