<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://www.nic.uoregon.edu/mediawiki-point/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Shirley</id>
	<title>Point - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://www.nic.uoregon.edu/mediawiki-point/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Shirley"/>
	<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Special:Contributions/Shirley"/>
	<updated>2026-04-19T10:32:42Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.6</generator>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=247</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=247"/>
		<updated>2011-06-06T11:17:29Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops===&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at [http://sc08.supercomputing.org SC'08], [http://www.linuxclustersinstitute.org/conferences/index.html LCI'09], [http://www.iccs-meeting.org/iccs2009/ ICCS 2009], [http://www.teragrid.org/tg09/ TeraGrid'09], [http://sc09.supercomputing.org SC '09], [http://www.linuxclustersinstitute.org/conferences/index.html LCI'10], and [http://www.sc10.supercomputing.org SC'10].&lt;br /&gt;
&lt;br /&gt;
===Workshop Materials===&lt;br /&gt;
The POINT LiveDVD which contains lecture slides, tool software, and workshop examples, is available [http://tau.uoregon.edu/point.iso here]. You can use the LiveDVD to boot your computer with a patched Linux kernel that can access the processor hardware counters.&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=246</id>
		<title>Project Info</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=246"/>
		<updated>2011-06-06T11:13:34Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Productivity from Open, INtegrated Tools (POINT) project is funded as part of the NSF's [http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5174 Software Development for Cyberinfrastructure (SDCI)] program. The goal of the this project is to integrate, harden, and deploy an open, portable, robust performance tools environment for the NSF-funded high-performance computing centers. We are leveraging the widely-used [http://tau.uoregon.edu TAU], [http://icl.cs.utk.edu/papi/ PAPI], [http://www.scalasca.org/  Scalasca], and [http://perfsuite.ncsa.uiuc.edu/ PerfSuite] technologies as core components, improving them as necessary to meet user and application needs.&lt;br /&gt;
* [[The POINT of Performance|Project News Release]]&lt;br /&gt;
* [[Milestones|Project Milestones]] (members only)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Four major institutions are collaborating in this project: [http://www.uoregon.edu University of Oregon], [http://www.utk.edu University of Tennessee at Knoxville] and [http://www.ncsa.uiuc.edu National Center for Supercomputing Applications] are developing and integrating the performance tools. The [http://psc.edu Pittsburgh Supercomputing Center] is leading the application engagement and outreach effort.&lt;br /&gt;
&lt;br /&gt;
* [[People|Principal Researchers]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contact ==&lt;br /&gt;
We would like to hear from anyone interested in the POINT project.  If you have any questions, comments, or requests, please [mailto:%70%6f%69%6e%74%40%6e%69%63%2e%75%6f%72%65%67%6f%6e%2e%65%64%75 send us an email].&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=TeraGrid_Support&amp;diff=237</id>
		<title>TeraGrid Support</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=TeraGrid_Support&amp;diff=237"/>
		<updated>2009-07-28T01:34:27Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Software Deployment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In recognition of its highly effective user support record, [http://www.psc.edu PSC] has been tasked with coordinating user support for the entire TeraGrid. The PSC team is introducing the productivity tools suite developed by the project to user support teams at all nine TeraGrid resource provider sites and is leveraging the TeraGrid’s EOT program to deliver talks, tutorials, and MSI activities.&lt;br /&gt;
&lt;br /&gt;
==Software Deployment==&lt;br /&gt;
&lt;br /&gt;
See the [http://rib.cs.utk.edu/rib3app/catalog?rh=41 POINT software catalog and deployment matrix] for information about availability of POINT tools on TeraGrid machines.  See also the [http://hpcsoftware.teragrid.org TeraGrid Software Database].&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=TeraGrid_Support&amp;diff=236</id>
		<title>TeraGrid Support</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=TeraGrid_Support&amp;diff=236"/>
		<updated>2009-07-28T01:26:04Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Software Deployment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In recognition of its highly effective user support record, [http://www.psc.edu PSC] has been tasked with coordinating user support for the entire TeraGrid. The PSC team is introducing the productivity tools suite developed by the project to user support teams at all nine TeraGrid resource provider sites and is leveraging the TeraGrid’s EOT program to deliver talks, tutorials, and MSI activities.&lt;br /&gt;
&lt;br /&gt;
==Software Deployment==&lt;br /&gt;
&lt;br /&gt;
See the [http://rib.cs.utk.edu/rib3app/catalog?rh=41 POINT software catalog and deployment matrix] for information about availability of POINT tools on TeraGrid machines.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=TeraGrid_Support&amp;diff=235</id>
		<title>TeraGrid Support</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=TeraGrid_Support&amp;diff=235"/>
		<updated>2009-07-28T01:25:40Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In recognition of its highly effective user support record, [http://www.psc.edu PSC] has been tasked with coordinating user support for the entire TeraGrid. The PSC team is introducing the productivity tools suite developed by the project to user support teams at all nine TeraGrid resource provider sites and is leveraging the TeraGrid’s EOT program to deliver talks, tutorials, and MSI activities.&lt;br /&gt;
&lt;br /&gt;
==Software Deployment==&lt;br /&gt;
&lt;br /&gt;
See the [http://rib.cs.utk.edu/rib3app/catalog?rh=41] POINT software catalog and deployment matrix] for information about availability of POINT tools on TeraGrid machines.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=234</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=234"/>
		<updated>2009-07-27T20:40:48Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Previous Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops===&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at [http://sc08.supercomputing.org SC'08], [http://www.linuxclustersinstitute.org/conferences/index.html LCI'09], [http://www.iccs-meeting.org/iccs2009/ ICCS 2009], and [http://www.teragrid.org/tg09/ TeraGrid'09].&lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops===&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
&lt;br /&gt;
[http://scyourway.nacse.org/conference/view/tut138 Productive Performance Engineering of Petascale Applications with POINT and VI-HPS]&lt;br /&gt;
&lt;br /&gt;
===Workshop Materials===&lt;br /&gt;
The POINT LiveDVD which contains lecture slides, tool software, and workshop examples, is available [http://tau.uoregon.edu/point.iso here]. You can use the LiveDVD to boot your computer with a patched Linux kernel that can access the processor hardware counters.&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=233</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=233"/>
		<updated>2009-07-27T20:30:10Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops===&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops===&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
&lt;br /&gt;
[http://scyourway.nacse.org/conference/view/tut138 Productive Performance Engineering of Petascale Applications with POINT and VI-HPS]&lt;br /&gt;
&lt;br /&gt;
===Workshop Materials===&lt;br /&gt;
The POINT LiveDVD which contains lecture slides, tool software, and workshop examples, is available [http://tau.uoregon.edu/point.iso here]. You can use the LiveDVD to boot your computer with a patched Linux kernel that can access the processor hardware counters.&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=232</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=232"/>
		<updated>2009-07-27T20:27:51Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops===&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops===&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
&lt;br /&gt;
[http://scyourway.nacse.org/conference/view/tut138 Productive Performance Engineering of Petascale Applications with POINT and VI-HPS]&lt;br /&gt;
&lt;br /&gt;
===Workshop Materials===&lt;br /&gt;
The POINT LiveDVD which contains lecture slides, tool software, and workshop examples, is available [http://tau.uoregon.edu/point.iso here].  You can use the LiveDVD to boot your computer with a patched Linux kernel that can access the processor hardware counters.&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=231</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=231"/>
		<updated>2009-07-27T20:24:59Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Previous Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops===&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops===&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
&lt;br /&gt;
[http://scyourway.nacse.org/conference/view/tut138 Productive Performance Engineering of Petascale Applications with POINT and VI-HPS]&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=230</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=230"/>
		<updated>2009-07-27T20:24:17Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* =Previous Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
==Previous Workshops==&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops===&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
&lt;br /&gt;
[http://scyourway.nacse.org/conference/view/tut138 Productive Performance Engineering of Petascale Applications with POINT and VI-HPS]&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=229</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=229"/>
		<updated>2009-07-27T20:23:48Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* =Upcoming Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops==&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops===&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
&lt;br /&gt;
[http://scyourway.nacse.org/conference/view/tut138 Productive Performance Engineering of Petascale Applications with POINT and VI-HPS]&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=228</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=228"/>
		<updated>2009-07-27T20:23:08Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* =Upcoming Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops==&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops==&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
&lt;br /&gt;
[http://scyourway.nacse.org/conference/view/tut138 Productive Performance Engineering of Petascale Applications with POINT and VI-HPS]&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=227</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=227"/>
		<updated>2009-07-27T20:21:40Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops==&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops==&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
Productive Performance Engineering of Petascale Applications with POINT and VI-HPS&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Workshop Materials===&lt;br /&gt;
&lt;br /&gt;
The POINT LiveDVD which contains lecture slides, tool software, and workshop examples is available [http://tau.uoregon.edu/point.iso here].  You can use the LiveDVD to boot your computer with a patched Linux kernel.&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=226</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=226"/>
		<updated>2009-07-27T20:20:03Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
* Previous Workshops&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops==&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops==&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
Productive Performance Engineering of Petascale Applications with POINT and VI-HPS&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Workshop Materials===&lt;br /&gt;
&lt;br /&gt;
The POINT LiveDVD which contains lecture slides, tool software, and workshop examples is available [http://tau.uoregon.edu/point.iso here].  You can use the LiveDVD to boot your computer with a patched Linux kernel.&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=225</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=225"/>
		<updated>2009-07-27T20:18:45Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Workshops */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
===Previous Workshops==&lt;br /&gt;
&lt;br /&gt;
We have previously taught workshops and tutorials at SC'08, LCI'09, ICCS'09, and TeraGrid'09.  &lt;br /&gt;
&lt;br /&gt;
===Upcoming Workshops==&lt;br /&gt;
&lt;br /&gt;
November 16, 2009 SC'09 tutorial: &lt;br /&gt;
Productive Performance Engineering of Petascale Applications with POINT and VI-HPS&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Workshop Materials===&lt;br /&gt;
&lt;br /&gt;
The POINT LiveDVD which contains lecture slides, tool software, and workshop examples is available [http://tau.uoregon.edu/point.iso here].  You can use the LiveDVD to boot your computer with a patched Linux kernel.&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=224</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=224"/>
		<updated>2009-07-27T20:08:06Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Portal */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;br /&gt;
The TAU Portal is an extension of the TAU Performance System. The Portal is designed to help users manage their high-performance computing applications and performance data, and to facilitate collaboration with performance analysis experts. Using the Portal, you can create a password-protected workspace that will be accessible only to you and your colleagues.  You may upload performance data from any of the POINT tools, as well as some third-party tools, into your workspace.  The sidebar of thePortal now has a ParaProf launcher button that will show you all the performance data in the workspace in one ParaProf window.   You can log comments, experiment notes, or performance questions directly on the Portal to be shared with your colleagues.  &lt;br /&gt;
&lt;br /&gt;
Go to the [http://tau.nic.uoregon.edu/user/login TAU Portal] to create an account, login, or see the demo.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=223</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=223"/>
		<updated>2009-07-19T01:17:18Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== '''This page is under construction!''' ==&lt;br /&gt;
&lt;br /&gt;
==Workshops==&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=222</id>
		<title>Outreach</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Outreach&amp;diff=222"/>
		<updated>2009-07-19T01:15:24Z</updated>

		<summary type="html">&lt;p&gt;Shirley: New page: ==Workshops==  ==Portal==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Workshops==&lt;br /&gt;
&lt;br /&gt;
==Portal==&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=220</id>
		<title>Project Info</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=220"/>
		<updated>2009-07-14T22:13:46Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Productivity from Open, INtegrated Tools (POINT) project is funded as part of the NSF's [http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5174 Software Development for Cyberinfrastructure (SDCI)] program. The goal of the this project is to integrate, harden, and deploy an open, portable, robust performance tools environment for the NSF-funded high-performance computing centers. We are leveraging the widely-used [http://tau.uoregon.edu TAU], [http://icl.cs.utk.edu/papi/ PAPI], [http://www.scalasca.org/  Scalasca], and [http://perfsuite.ncsa.uiuc.edu/ PerfSuite] technologies as core components, improving them as necessary to meet user and application needs.&lt;br /&gt;
* [[The POINT of Performance|Project News Release]]&lt;br /&gt;
* [[Milestones|Project Milestones]] (members only)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Four major institutions are collaborating in this project: [http://www.uoregon.edu University of Oregon], [http://www.utk.edu University of Tennessee at Knoxville] and [http://www.ncsa.uiuc.edu National Center for Supercomputing Applications] are developing and integrating the performance tools. The [http://psc.edu Pittsburgh Supercomputing Center] is leading the application engagement and outreach effort.&lt;br /&gt;
&lt;br /&gt;
* [[People|Principal Researchers]]&lt;br /&gt;
&lt;br /&gt;
==SC'09==&lt;br /&gt;
The POINT team will have several events at this year's SC'09 Conference in Portland, OR, including a tutorial. More info will be on our [[News|News page]].&lt;br /&gt;
&lt;br /&gt;
== Contact ==&lt;br /&gt;
We would like to hear from anyone interested in the POINT project.  If you have any questions, comments, or requests, please [mailto:%70%6f%69%6e%74%40%6e%69%63%2e%75%6f%72%65%67%6f%6e%2e%65%64%75 send us an email].&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=219</id>
		<title>Project Info</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=219"/>
		<updated>2009-07-14T22:11:34Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Productivity from Open, INtegrated Tools (POINT) project is funded as part of the NSF's [http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5174 Software Development for Cyberinfrastructure (SDCI)] program. The goal of the this project is to integrate, harden, and deploy an open, portable, robust performance tools environment for the NSF-funded high-performance computing centers. We are leveraging the widely-used [http://tau.uoregon.edu TAU], [http://icl.cs.utk.edu/papi/ PAPI], [http://www.scalasca.org/  Scalasca], and [http://perfsuite.ncsa.uiuc.edu/ PerfSuite] technologies as core components, improving them as necessary to meet user and application needs.&lt;br /&gt;
* [[The POINT of Performance|Project News Release]]&lt;br /&gt;
* [[Milestones|Project Milestones]] (members only)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Four major institutions are collaborating in this project: [http://www.uoregon.edu University of Oregon], [http://www.utk.edu University of Tennessee at Knoxville] and [http://www.ncsa.uiuc.edu National Center for Supercomputing Applications] are developing and integrating the performance tools. The [http://psc.edu Pittsburgh Supercomputing Center] is leading the application engagement and outreach effort.&lt;br /&gt;
&lt;br /&gt;
* [[People|Principal Researchers]]&lt;br /&gt;
&lt;br /&gt;
==SC'09==&lt;br /&gt;
The POINT team will have several events at this year's SC'09 Conference in Portland, OR, including a tutorial. More info will be on our [[News|News page]].&lt;br /&gt;
&lt;br /&gt;
== Contact ==&lt;br /&gt;
We would like to hear from anyone interested in the POINT project, if you have any questions or comments please [mailto:%70%6f%69%6e%74%40%6e%69%63%2e%75%6f%72%65%67%6f%6e%2e%65%64%75 send us an email].&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=218</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=218"/>
		<updated>2009-07-14T22:10:27Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
&lt;br /&gt;
==June 22 2009==&lt;br /&gt;
[http://tau.uoregon.edu/point.iso POINT LiveDVD] now points to the latest version of the LiveDVD used in POINT training workshops. It adds support for the&lt;br /&gt;
latest releases of PAPI, TAU, VampirTrace, and Scalasca and features&lt;br /&gt;
workshop examples for VampirTrace and Scalasca.&lt;br /&gt;
&lt;br /&gt;
==May 15 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.2 released]&lt;br /&gt;
&lt;br /&gt;
==March 4 2009==&lt;br /&gt;
[http://perfsuite.ncsa.uiuc.edu Perfsuite 1.0.0 alpha 1 released]&lt;br /&gt;
* Introduction of Java-based software into PerfSuite, with the first component being a Java package for programmatic access to data contained in PerfSuite-generated XML documents.  &lt;br /&gt;
&lt;br /&gt;
* PerfSuite can now generate output files suitable for use with the Cube visualization tool.  Cube is part of Scalasca, a set of open source software tools for scalable performance analysis.&lt;br /&gt;
&lt;br /&gt;
==January 23 2009==&lt;br /&gt;
[http://icl.cs.utk.edu/papi/news/news.html?id=203 Support for Intel Core i7 (Nehalem)]&lt;br /&gt;
PAPI now supports Intel's new Core i7 (Nehalem) processor. &lt;br /&gt;
&lt;br /&gt;
==January 22 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.1 released]&lt;br /&gt;
&lt;br /&gt;
TAU can now interface with PGI's runtime library and extract performance information associated with kernels that execute on the GPGPUs. TAU tracks the interactions with the GPGPU as seen from the host and generates the performance data. This data includes the name of the routine, file, line number as well as block and grid sizes and individual variable names. This feature works with PGI 8.0.3+ compilers that support the #acc region/end region directives. These source annotations may be placed around loops to automatically generate GPGPU code that executes on CUDA enabled NVidia cards. Users do not need to write any GPGPU specific code explicity. Instead, they use a compiler flag (-ta=nvidia) to generate this code using a special add-on package with the PGI compiler. &lt;br /&gt;
&lt;br /&gt;
This release improves support for Charm++ and NAMD. We have a [http://www.nic.uoregon.edu/tau-wiki/Guide:NAMDTAU wiki page] that describes how to build and use TAU with NAMD. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=217</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=217"/>
		<updated>2009-07-14T22:07:17Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
&lt;br /&gt;
==June 22 2009==&lt;br /&gt;
[http://tau.uoregon.edu/point.iso POINT LiveDVD] now points to the latest version of the LiveDVD used in POINT training workshops. It adds support for the&lt;br /&gt;
latest releases of PAPI, TAU, VampirTrace, and Scalasca and features&lt;br /&gt;
workshop examples for VampirTrace and Scalasca.&lt;br /&gt;
&lt;br /&gt;
==May 15 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.2 released]&lt;br /&gt;
&lt;br /&gt;
==March 4 2009==&lt;br /&gt;
[http://perfsuite.ncsa.uiuc.edu Perfsuite 1.0.0 alpha 1 released]&lt;br /&gt;
* Introduction of Java-based software into PerfSuite, with the first component being a Java package for programmatic access to data contained in PerfSuite-generated XML documents.  &lt;br /&gt;
&lt;br /&gt;
* PerfSuite can now generate output files suitable for use with the Cube visualization tool.  Cube is part of Scalasca, a set of open source software tools for scalable performance analysis.&lt;br /&gt;
&lt;br /&gt;
==January 23 2009==&lt;br /&gt;
[http://icl.cs.utk.edu/papi/news/news.html?id=203 Support for Intel Core i7 (Nehalem)]&lt;br /&gt;
PAPI now supports Intel's new Core i7 (Nehalem) processor. &lt;br /&gt;
&lt;br /&gt;
==January 22 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.1 released]&lt;br /&gt;
&lt;br /&gt;
TAU can now interface with PGI's runtime library and extract performance information associated with kernels that execute on the GPGPUs. TAU tracks the interactions with the GPGPU as seen from the host and generates the performance data. This data includes the name of the routine, file, line number as well as block and grid sizes and individual variable names. This feature works with PGI 8.0.3+ compilers that support the #acc region/end region directives. These source annotations may be placed around loops to automatically generate GPGPU code that executes on CUDA enabled NVidia cards. Users do not need to write any GPGPU specific code explicity. Instead, they use a compiler flag (-ta=nvidia) to generate this code using a special add-on package with the PGI compiler. &lt;br /&gt;
&lt;br /&gt;
This release improves support for Charm++ and NAMD. We have a [http://www.nic.uoregon.edu/tau-wiki/Guide:NAMDTAU wiki page] that describes how to build and use TAU with NAMD. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=216</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=216"/>
		<updated>2009-07-14T22:05:37Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
&lt;br /&gt;
==June 22 2009==&lt;br /&gt;
[http://tau.uoregon.edu/point.iso POINT LiveDVD] now points to the latest version of the LiveDVD used in POINT training workshops. It adds support for the&lt;br /&gt;
latest releases of PAPI, TAU, VampirTrace, and Scalasca and features&lt;br /&gt;
workshop examples for VampirTrace and Scalasca.&lt;br /&gt;
&lt;br /&gt;
==May 15 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.2 released]&lt;br /&gt;
&lt;br /&gt;
==March 4 2009==&lt;br /&gt;
[http://perfsuite.ncsa.uiuc.edu Perfsuite 1.0.0 alpha 1 released]&lt;br /&gt;
* Introduction of Java-based software into PerfSuite, with the first component being a Java package for programmatic access to data contained in PerfSuite-generated XML documents.  &lt;br /&gt;
&lt;br /&gt;
==January 23 2009==&lt;br /&gt;
[http://icl.cs.utk.edu/papi/news/news.html?id=203 Support for Intel Core i7 (Nehalem)]&lt;br /&gt;
PAPI now supports Intel's new Core i7 (Nehalem) processor. &lt;br /&gt;
&lt;br /&gt;
==January 22 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.1 released]&lt;br /&gt;
&lt;br /&gt;
TAU can now interface with PGI's runtime library and extract performance information associated with kernels that execute on the GPGPUs. TAU tracks the interactions with the GPGPU as seen from the host and generates the performance data. This data includes the name of the routine, file, line number as well as block and grid sizes and individual variable names. This feature works with PGI 8.0.3+ compilers that support the #acc region/end region directives. These source annotations may be placed around loops to automatically generate GPGPU code that executes on CUDA enabled NVidia cards. Users do not need to write any GPGPU specific code explicity. Instead, they use a compiler flag (-ta=nvidia) to generate this code using a special add-on package with the PGI compiler. &lt;br /&gt;
&lt;br /&gt;
This release improves support for Charm++ and NAMD. We have a [http://www.nic.uoregon.edu/tau-wiki/Guide:NAMDTAU wiki page] that describes how to build and use TAU with NAMD. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=215</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=215"/>
		<updated>2009-07-14T22:04:34Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
&lt;br /&gt;
==June 22 2009==&lt;br /&gt;
[http://tau.uoregon.edu/point.iso POINT LiveDVD] now points to the latest version of the LiveDVD used in POINT training workshops. It adds support for the&lt;br /&gt;
latest releases of PAPI, TAU, VampirTrace, and Scalasca and features&lt;br /&gt;
workshop examples for VampirTrace and Scalasca.&lt;br /&gt;
&lt;br /&gt;
==May 15 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.2 released]&lt;br /&gt;
&lt;br /&gt;
==March 4 2009==&lt;br /&gt;
[http://perfsuite.ncsa.uiuc.edu Perfsuite 1.0.0 alpha 1 released]&lt;br /&gt;
* Introduction of Java-based software into PerfSuite, with the first&lt;br /&gt;
	  component being a Java package for programmatic access to data&lt;br /&gt;
	  contained in PerfSuite-generated XML documents.  &lt;br /&gt;
&lt;br /&gt;
==January 23 2009==&lt;br /&gt;
[http://icl.cs.utk.edu/papi/news/news.html?id=203 Support for Intel Core i7 (Nehalem)]&lt;br /&gt;
PAPI now supports Intel's new Core i7 (Nehalem) processor. &lt;br /&gt;
&lt;br /&gt;
==January 22 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.1 released]&lt;br /&gt;
&lt;br /&gt;
TAU can now interface with PGI's runtime library and extract performance information associated with kernels that execute on the GPGPUs. TAU tracks the interactions with the GPGPU as seen from the host and generates the performance data. This data includes the name of the routine, file, line number as well as block and grid sizes and individual variable names. This feature works with PGI 8.0.3+ compilers that support the #acc region/end region directives. These source annotations may be placed around loops to automatically generate GPGPU code that executes on CUDA enabled NVidia cards. Users do not need to write any GPGPU specific code explicity. Instead, they use a compiler flag (-ta=nvidia) to generate this code using a special add-on package with the PGI compiler. &lt;br /&gt;
&lt;br /&gt;
This release improves support for Charm++ and NAMD. We have a [http://www.nic.uoregon.edu/tau-wiki/Guide:NAMDTAU wiki page] that describes how to build and use TAU with NAMD. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=214</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=214"/>
		<updated>2009-07-14T22:00:06Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
&lt;br /&gt;
==June 22 2009==&lt;br /&gt;
[http://tau.uoregon.edu/point.iso POINT LiveDVD] now points to the latest version of the LiveDVD used in POINT training workshops. It adds support for the&lt;br /&gt;
latest releases of PAPI, TAU, VampirTrace, and Scalasca and features&lt;br /&gt;
workshop examples for VampirTrace and Scalasca.&lt;br /&gt;
&lt;br /&gt;
==May 15 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.2 released]&lt;br /&gt;
&lt;br /&gt;
==January 23 2009==&lt;br /&gt;
[http://icl.cs.utk.edu/papi/news/news.html?id=203 Support for Intel Core i7 (Nehalem)]&lt;br /&gt;
PAPI now supports Intel's new Core i7 (Nehalem) processor. &lt;br /&gt;
&lt;br /&gt;
==January 22 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.1 released]&lt;br /&gt;
&lt;br /&gt;
TAU can now interface with PGI's runtime library and extract performance information associated with kernels that execute on the GPGPUs. TAU tracks the interactions with the GPGPU as seen from the host and generates the performance data. This data includes the name of the routine, file, line number as well as block and grid sizes and individual variable names. This feature works with PGI 8.0.3+ compilers that support the #acc region/end region directives. These source annotations may be placed around loops to automatically generate GPGPU code that executes on CUDA enabled NVidia cards. Users do not need to write any GPGPU specific code explicity. Instead, they use a compiler flag (-ta=nvidia) to generate this code using a special add-on package with the PGI compiler. &lt;br /&gt;
&lt;br /&gt;
This release improves support for Charm++ and NAMD. We have a [http://www.nic.uoregon.edu/tau-wiki/Guide:NAMDTAU wiki page] that describes how to build and use TAU with NAMD. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=213</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=213"/>
		<updated>2009-07-14T21:53:54Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
&lt;br /&gt;
==January 23 2009==&lt;br /&gt;
[http://icl.cs.utk.edu/papi/news/news.html?id=203 Support for Intel Core i7 (Nehalem)]&lt;br /&gt;
PAPI now supports Intel's new Core i7 (Nehalem) processor. &lt;br /&gt;
&lt;br /&gt;
==January 22 2009==&lt;br /&gt;
[http://www.cs.uoregon.edu/research/tau/news.php TAU v2.18.1 released]&lt;br /&gt;
&lt;br /&gt;
TAU can now interface with PGI's runtime library and extract performance information associated with kernels that execute on the GPGPUs. TAU tracks the interactions with the GPGPU as seen from the host and generates the performance data. This data includes the name of the routine, file, line number as well as block and grid sizes and individual variable names. This feature works with PGI 8.0.3+ compilers that support the #acc region/end region directives. These source annotations may be placed around loops to automatically generate GPGPU code that executes on CUDA enabled NVidia cards. Users do not need to write any GPGPU specific code explicity. Instead, they use a compiler flag (-ta=nvidia) to generate this code using a special add-on package with the PGI compiler. &lt;br /&gt;
&lt;br /&gt;
This release improves support for Charm++ and NAMD. We have a [http://www.nic.uoregon.edu/tau-wiki/Guide:NAMDTAU wiki page] that describes how to build and use TAU with NAMD. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=212</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=212"/>
		<updated>2009-07-14T21:49:54Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
&lt;br /&gt;
==January 23 2009==&lt;br /&gt;
[http://icl.cs.utk.edu/papi/news/news.html?id=203 Support for Intel Core i7 (Nehalem)]&lt;br /&gt;
PAPI now supports Intel's new Core i7 (Nehalem) processor. &lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=211</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=211"/>
		<updated>2009-07-14T21:46:05Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=210</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=News&amp;diff=210"/>
		<updated>2009-07-14T21:45:17Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==July 10 2009==&lt;br /&gt;
 [http://www.fz-juelich.de/jsc/scalasca/software/download#scalasca Scalasca 1.2 released]&lt;br /&gt;
Includes improved support for OpenMP &amp;amp; hybrid MPI/OpenMP codes, MPI File I/O analysis, PGI compilers,&lt;br /&gt;
Cray XT &amp;amp; NEC-SX, a new User Guide, and numerous other bug fixes &amp;amp; improvements.&lt;br /&gt;
 &lt;br /&gt;
==July 10 2009==&lt;br /&gt;
[http://www.fz-juelich.de/jsc/scalasca/software/download#cube CUBE 3.2 released]&lt;br /&gt;
Stand-alone distribution of the graphical user interface component of Scalasca 1.2.&lt;br /&gt;
 &lt;br /&gt;
==November 10 2008==&lt;br /&gt;
The POINT team will have a large presence at [http://sc08.supercomputing.org/ SuperComputing 2008] including our full day [http://scyourway.nacse.org/conference/view/tut136 tutorial] Monday (8:30am-5:00pm) and our [http://scyourway.nacse.org/conference/view/bof127 BOF presentation] on Tuesday (12:15pm-1:15pm).&lt;br /&gt;
&lt;br /&gt;
You can also find our team at their respective booths:&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/NCSA National Center for Supercomputing Applications (booth 351)]&lt;br /&gt;
 * University of Oregon at the [http://scyourway.nacse.org/exhibits/view/NNSA_ASC NNSA ASC (booth 521)]&lt;br /&gt;
 * [http://scyourway.nacse.org/exhibits/view/Pittsburgh_Supercomputing_Center Pittsburgh Supercomputing Center (booth 741)]&lt;br /&gt;
&lt;br /&gt;
==June 9th 2008==&lt;br /&gt;
We have a put together a [[Media:POINT.pdf|Poster (PDF)]] for [http://www.tacc.utexas.edu/tg08/ TeraGrid '08], it provides a good introduction to the POINT project.&lt;br /&gt;
&lt;br /&gt;
==March 21st 2008==&lt;br /&gt;
&lt;br /&gt;
HPC wire has written an article about our project, here is the [http://www.hpcwire.com/hpc/2232188.html link].&lt;br /&gt;
&lt;br /&gt;
==February 8th 2008==&lt;br /&gt;
The first iteration of the website has been deployed; take a look around and learn more about our project.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=TeraGrid_Support&amp;diff=209</id>
		<title>TeraGrid Support</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=TeraGrid_Support&amp;diff=209"/>
		<updated>2009-07-14T21:31:09Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In recognition of its highly effective user support record, [http://www.psc.edu PSC] has been tasked with coordinating user support for the entire TeraGrid. The PSC team is introducing the productivity tools suite developed by the project to user support teams at all nine TeraGrid resource provider sites and is leveraging the TeraGrid’s EOT program to deliver talks, tutorials, and MSI activities.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=NAMDPerformance&amp;diff=208</id>
		<title>NAMDPerformance</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=NAMDPerformance&amp;diff=208"/>
		<updated>2009-07-14T21:18:42Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=NAMD Performance Study=&lt;br /&gt;
&lt;br /&gt;
NAMD is written in [http://charm.cs.uiuc.edu/ Charm++] and thus has some unique attributes when profiled by TAU. For example the Charm++ scheduler, which assigns task to processors and helps in load balancing the program, has a notion of idling while waiting for tasks to complete. Thus TAU creates an event to capture time spent when the scheduler is in its idle state (Idle) as well as an event (Main) to account for the communication latencies. You can see how NAMD performs on different hardware with these charts:&lt;br /&gt;
&lt;br /&gt;
[[Image:intrepid-ranger-breakdown.png]]&lt;br /&gt;
&lt;br /&gt;
Whereas on Intrepid (BlueGene P) Idle time (red) increases as NAMD scales, on Ranger (Sun x86 cluster) Main time increases (blue). This shows how Ranger's relatively slower communications layer results in larger latencies as NAMD scales compared to how NAMD scales on Intrepid.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ability of NAMD to scale to a large number of processors is highly dependent on how it is configured. Many options are provided to tweak NAMD's performance structure to optimize performance for different simulation parameters and machines. So instead of focusing on NAMD's scaling behavior, we showed how TAU can identify other performance aspects of NAMD. This chart shows the the increasing variation across processors for varius NAMD events. Notice how after each load balancing phase the divergence among processors is temporally arrested.&lt;br /&gt;
&lt;br /&gt;
[[Image:namd-deviation-snapshot.png|800px]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=NAMDPerformance&amp;diff=207</id>
		<title>NAMDPerformance</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=NAMDPerformance&amp;diff=207"/>
		<updated>2009-07-14T21:15:37Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=NAMD Performance Study=&lt;br /&gt;
&lt;br /&gt;
NAMD is written in [http://charm.cs.uiuc.edu/ Charm++] and thus has some unique attributes when profiled by TAU. For example the Charm++ scheduler, which assigns task to processors and helps in load balancing the program, has a notion of idling while waiting for tasks to complete. Thus TAU creates a event to capture time spent when the scheduler is in its idle state (Idle) as well as a event (Main) to account for the communication latencies. You can see how NAMD performs on different hardware with these charts:&lt;br /&gt;
&lt;br /&gt;
[[Image:intrepid-ranger-breakdown.png]]&lt;br /&gt;
&lt;br /&gt;
Where on Intrepid (BlueGene P) Idle time (red) increases as NAMD scales on Ranger (Sun x86 cluster) Main time increases (blue). This shows how Rangers relatively slower communications layer results in larger latencies as NAMD scales compared to how NAMD scales on Intrepid.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ability for NAMD to scale to a large number of processors is highly dependent on how it is configured. Many options are provided to tweak NAMD's performance structure to optimize performance for different simulation parameters and machines. So insteed of focusing on NAMD's scaling behavior we showed how TAU can identify other performance aspects of NAMD. This chart shows the the incressing variation across processors for varius NAMD events. Notice how after each load balancing phase the divergence among processors is temporally arrested.&lt;br /&gt;
&lt;br /&gt;
[[Image:namd-deviation-snapshot.png|800px]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=NAMDPerformance&amp;diff=206</id>
		<title>NAMDPerformance</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=NAMDPerformance&amp;diff=206"/>
		<updated>2009-07-14T21:14:55Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* NAMD Performance Study */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=NAMD Performance Study=&lt;br /&gt;
&lt;br /&gt;
NAMD is written in [http://charm.cs.uiuc.edu/ charm++] and thus has some unique attributes when profiled by TAU. For example the Charm++ scheduler, which assigns task to processors and helps in load balancing the program, has a notion of idling while waiting for tasks to complete. Thus TAU creates a event to capture time spent when the scheduler is in its idle state (Idle) as well as a event (Main) to account for the communication latencies. You can see how NAMD performs on different hardware with these charts:&lt;br /&gt;
&lt;br /&gt;
[[Image:intrepid-ranger-breakdown.png]]&lt;br /&gt;
&lt;br /&gt;
Where on Intrepid (BlueGene P) Idle time (red) increases as NAMD scales on Ranger (Sun x86 cluster) Main time increases (blue). This shows how Rangers relatively slower communications layer results in larger latencies as NAMD scales compared to how NAMD scales on Intrepid.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ability for NAMD to scale to a large number of processors is highly dependent on how it is configured. Many options are provided to tweak NAMD's performance structure to optimize performance for different simulation parameters and machines. So insteed of focusing on NAMD's scaling behavior we showed how TAU can identify other performance aspects of NAMD. This chart shows the the incressing variation across processors for varius NAMD events. Notice how after each load balancing phase the divergence among processors is temporally arrested.&lt;br /&gt;
&lt;br /&gt;
[[Image:namd-deviation-snapshot.png|800px]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Scientific_Applications&amp;diff=205</id>
		<title>Scientific Applications</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Scientific_Applications&amp;diff=205"/>
		<updated>2009-07-14T21:13:36Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* [http://www.ks.uiuc.edu/Research/namd/ NAMD] */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
== [http://cobweb.ecn.purdue.edu/~gekco/nemo3D/ NEMO3D] ==&lt;br /&gt;
NEMO3D calculates eigenstates in (almost) arbitrarily shaped semiconductor structures in the typical column IV and III-V materials. It has been running as an educational version in [https://www.nanohub.org/ nanoHUB] for a year with executions that take a few seconds. This version has been used by over 600 people. It is expected to be running soon on large systems with executions that will require hours of CPU time with hundreds of users. The code is currently being ported to the dual-core Cray XT3 at [http://www.psc.edu PSC].&lt;br /&gt;
* [[NEMO3D | Performance Results]]&lt;br /&gt;
&lt;br /&gt;
== [http://lca.ucsd.edu/portal/software/enzo ENZO] ==&lt;br /&gt;
ENZO is an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-Body) which is designed to do simulations of cosmological structure formation. Understanding the performance of AMR applications on distributed memory architectures is challenging, due to the dynamic multilevel data structures and variety of communication patterns involved.&lt;br /&gt;
* [[ENZO | Performance Results]]&lt;br /&gt;
&lt;br /&gt;
== [http://www.ks.uiuc.edu/Research/namd/ NAMD] ==&lt;br /&gt;
Development of NAMD is a collaborative effort between the Theoretical and Computational Biophysics Group (TCBG) and the Parallel Programming Laboratory (PPL) at UIUC and is based on PPL’s Charm++ parallel programming system, which has extensive support for latency tolerance and dynamic load balancing. Efficient, lightweight communication is critical for Charm++ and the applications built within this framework.&lt;br /&gt;
* [[NAMDPerformance | Performance Results]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Scientific_Applications&amp;diff=204</id>
		<title>Scientific Applications</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Scientific_Applications&amp;diff=204"/>
		<updated>2009-07-14T21:01:10Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
== [http://cobweb.ecn.purdue.edu/~gekco/nemo3D/ NEMO3D] ==&lt;br /&gt;
NEMO3D calculates eigenstates in (almost) arbitrarily shaped semiconductor structures in the typical column IV and III-V materials. It has been running as an educational version in [https://www.nanohub.org/ nanoHUB] for a year with executions that take a few seconds. This version has been used by over 600 people. It is expected to be running soon on large systems with executions that will require hours of CPU time with hundreds of users. The code is currently being ported to the dual-core Cray XT3 at [http://www.psc.edu PSC].&lt;br /&gt;
* [[NEMO3D | Performance Results]]&lt;br /&gt;
&lt;br /&gt;
== [http://lca.ucsd.edu/portal/software/enzo ENZO] ==&lt;br /&gt;
ENZO is an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-Body) which is designed to do simulations of cosmological structure formation. Understanding the performance of AMR applications on distributed memory architectures is challenging, due to the dynamic multilevel data structures and variety of communication patterns involved.&lt;br /&gt;
* [[ENZO | Performance Results]]&lt;br /&gt;
&lt;br /&gt;
== [http://www.ks.uiuc.edu/Research/namd/ NAMD] ==&lt;br /&gt;
Development of NAMD is a collaborative effort between the Theoretical and Computational biophysics Group (TCBG) and the Parallel Programming Laboratory (PPL) at UIUC and is based on PPL’s Charm++ parallel programming system, which has extensive support for latency tolerance and dynamic load balancing. Efficient, lightweight communication is critical for Charm++ and the applications built within this framework.&lt;br /&gt;
* [[NAMDPerformance | Performance Results]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Performance_Tools&amp;diff=203</id>
		<title>Performance Tools</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Performance_Tools&amp;diff=203"/>
		<updated>2009-07-14T21:00:15Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== [http://tau.uoregon.edu TAU (Tuning and Analysis Utilities)] ==&lt;br /&gt;
TAU Performance System is a portable profiling and tracing toolkit for performance analysis of parallel programs written in Fortran, C, C++, Java, Python. Applications can be instrumented at the source code level using an automatic instrumentor tool based on the [http://www.cs.uoregon.edu/research/pdt PDT (Program Database Toolkit)], dynamically using [http://www.dyninst.org/ DyninstAPI], at runtime in the Java virtual machine, or manually using the instrumentation API.&lt;br /&gt;
====Learn about TAU====&lt;br /&gt;
* S. Shende and A. D. Malony, [http://www.cs.uoregon.edu/research/paracomp/publ/htbin/bibify.cgi?cmd=show&amp;amp;coll=JOUR&amp;amp;id=ijhpca05.tau&amp;amp;data_present=no &amp;quot;The TAU Parallel Performance System&amp;quot;] International Journal of High Performance Computing Applications, SAGE Publications, 20(2):287-331, Summer 2006&lt;br /&gt;
&lt;br /&gt;
* [http://tau.uoregon.edu/ Visit] TAU's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://icl.cs.utk.edu/papi/ PAPI (Performance Application Programming Interface)] ==&lt;br /&gt;
PAPI aims to provide the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see relationships between software performance and processor events.&lt;br /&gt;
&lt;br /&gt;
==== Learn about PAPI ====&lt;br /&gt;
* Browne, S., Dongarra, J., Garner, N., Ho, G., Mucci, P.[http://icl.cs.utk.edu/publications/pub-papers/2000/papi-journal-final.pdf &amp;quot;A Portable Programming Interface for Performance Evaluation on Modern Processors&amp;quot;] The International Journal of High Performance Computing Applications, Volume 14, number 3, pp. 189-204, Fall 2000.&lt;br /&gt;
* [http://icl.cs.utk.edu/papi/ Visit] PAPI's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://www.scalasca.org/ Scalasca] ==&lt;br /&gt;
Scalasca is a scalable performance-analysis tool for parallel applications supporting the programming models MPI, OpenMP, SHMEM, and combinations thereof. Its functionality addresses the entire analysis process including instrumentation, parallel post-processing of performance data, and result presentation. It is based on the idea of automatically searching event traces of parallel applications for execution patterns indicating inefficient behavior. The patterns are classified by category and their significance is quantified for every program phase and system resource involved. The results are made available to the user in a flexible graphical user interface, where they can be investigated on varying levels of granularity.&lt;br /&gt;
&lt;br /&gt;
==== Learn about Scalasca ====&lt;br /&gt;
&lt;br /&gt;
* F. Wolf, B. J. N. Wylie, E. Ábrahám, D. Becker, W. Frings, K. Fürlinger, M. Geimer, M.-A. Hermanns, B. Mohr, S. Moore, M. Pfeifer, Z. Szebeny[http://www.fz-juelich.de/jsc/datapool/KojakPubs/hlrs_ptw08.pdf &amp;quot;Usage of the SCALASCA Toolset for Scalable Performance Analysis of Large-Scale Parallel Applications&amp;quot;]  Proc. 2nd HLRS Parallel Tools Workshop, pp. 157-167, Stuttgart, Germany, July 2008. &lt;br /&gt;
* [http://www.scalasca.org/ Visit] Scalasca's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://perfsuite.ncsa.uiuc.edu/ PerfSuite] ==&lt;br /&gt;
PerfSuite is a collection of tools, utilities, and libraries for software performance analysis where the primary design goals are ease of use, comprehensibility, interoperability, and simplicity. This software can provide a good &amp;quot;entry point&amp;quot; for more detailed performance analysis and can help point the way towards selecting other tools and/or techniques using more specialized software if necessary (for example, tools/libraries from academic research groups or third-party commercial software).&lt;br /&gt;
&lt;br /&gt;
==== Learn about PerfSuite ====&lt;br /&gt;
&lt;br /&gt;
* [http://perfsuite.ncsa.uiuc.edu/publications/LCI-2005.pdf &amp;quot;PerfSuite: An Accessible, Open Source Performance Analysis Environment for Linux&amp;quot;]. 6th International Conference on Linux Clusters: The HPC Revolution 2005. Chapel Hill, NC. April 2005.&lt;br /&gt;
* [http://perfsuite.ncsa.uiuc.edu/ Visit] PerfSuite's website for more information.&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Performance_Tools&amp;diff=202</id>
		<title>Performance Tools</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Performance_Tools&amp;diff=202"/>
		<updated>2009-07-14T20:59:26Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* [http://icl.cs.utk.edu/papi/ PAPI (Performance Application Programming Interface)] */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== [http://tau.uoregon.edu TAU (Tuning and Analysis Utilities)] ==&lt;br /&gt;
TAU Performance System is a portable profiling and tracing toolkit for performance analysis of parallel programs written in Fortran, C, C++, Java, Python. Applications can be instrumented at the source code level using an automatic instrumentor tool based on the [http://www.cs.uoregon.edu/research/pdt PDT (Program Database Toolkit)], dynamically using [http://www.dyninst.org/ DyninstAPI], at runtime in the Java virtual machine, or manually using the instrumentation API.&lt;br /&gt;
====Learn about TAU====&lt;br /&gt;
* S. Shende and A. D. Malony, [http://www.cs.uoregon.edu/research/paracomp/publ/htbin/bibify.cgi?cmd=show&amp;amp;coll=JOUR&amp;amp;id=ijhpca05.tau&amp;amp;data_present=no &amp;quot;The TAU Parallel Performance System&amp;quot;] International Journal of High Performance Computing Applications, SAGE Publications, 20(2):287-331, Summer 2006&lt;br /&gt;
&lt;br /&gt;
* [http://tau.uoregon.edu/ Visit] TAU's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://icl.cs.utk.edu/papi/ PAPI (Performance Application Programming Interface)] ==&lt;br /&gt;
PAPI aims to provide the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see relationships between software performance and processor events.&lt;br /&gt;
&lt;br /&gt;
==== Learn about PAPI ====&lt;br /&gt;
* Browne, S., Dongarra, J., Garner, N., Ho, G., Mucci, P.[http://icl.cs.utk.edu/publications/pub-papers/2000/papi-journal-final.pdf &amp;quot;A Portable Programming Interface for Performance Evaluation on Modern Processors&amp;quot;] The International Journal of High Performance Computing Applications, Volume 14, number 3, pp. 189-204, Fall 2000.&lt;br /&gt;
* [http://icl.cs.utk.edu/papi/ Visit] PAPI's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://www.scalasca.org/ Scalasca] ==&lt;br /&gt;
Scalasca is a scalable performance-analysis tool for parallel applications supporting the programming models MPI, OpenMP, SHMEM, and combinations thereof. Its functionality addresses the entire analysis process including instrumentation, parallel post-processing of performance data, and result presentation. It is based on the idea of automatically searching event traces of parallel applications for execution patterns indicating inefficient behavior. The patterns are classified by category and their significance is quantified for every program phase and system resource involved. The results are made available to the user in a flexible graphical user interface, where they can be investigated on varying levels of granularity.&lt;br /&gt;
&lt;br /&gt;
==== Learn about Scalasca ====&lt;br /&gt;
&lt;br /&gt;
* F. Wolf, B. J. N. Wylie, E. Ábrahám, D. Becker, W. Frings, K. Fürlinger, M. Geimer, M.-A. Hermanns, B. Mohr, S. Moore, M. Pfeifer, Z. Szebeny[http://www.fz-juelich.de/jsc/datapool/KojakPubs/hlrs_ptw08.pdf &amp;quot;Usage of the SCALASCA Toolset for Scalable Performance Analysis of Large-Scale Parallel Applications&amp;quot;]  Proc. 2nd HLRS Parallel Tools Workshop, pp. 157-167, Stuttgart, Germany, July 2008. &lt;br /&gt;
* [http://www.scalasca.org/ Visit] Scalasca's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://perfsuite.ncsa.uiuc.edu/ PerfSuite] ==&lt;br /&gt;
PerfSuite is a collection of tools, utilities, and libraries for software performance analysis where the primary design goals are ease of use, comprehensibility, interoperability, and simplicity. This software can provide a good &amp;quot;entry point&amp;quot; for more detailed performance analysis and can help point the way towards selecting other tools and/or techniques using more specialized software if necessary (for example, tools/libraries from academic research groups or third-party commercial software).&lt;br /&gt;
&lt;br /&gt;
==== Learn about PerfSuite ====&lt;br /&gt;
&lt;br /&gt;
* [http://perfsuite.ncsa.uiuc.edu/publications/LCI-2005.pdf &amp;quot;PerfSuite: An Accessible, Open Source Performance Analysis Environment for Linux&amp;quot;]. 6th International Conference on Linux Clusters: The HPC Revolution 2005. Chapel Hill, NC. April 2005.&lt;br /&gt;
* [http://perfsuite.ncsa.uiuc.edu/ Visit] PerfSuite's website for more information.&lt;br /&gt;
'''Bold text'''&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Performance_Tools&amp;diff=201</id>
		<title>Performance Tools</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Performance_Tools&amp;diff=201"/>
		<updated>2009-07-14T20:58:40Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== [http://tau.uoregon.edu TAU (Tuning and Analysis Utilities)] ==&lt;br /&gt;
TAU Performance System is a portable profiling and tracing toolkit for performance analysis of parallel programs written in Fortran, C, C++, Java, Python. Applications can be instrumented at the source code level using an automatic instrumentor tool based on the [http://www.cs.uoregon.edu/research/pdt PDT (Program Database Toolkit)], dynamically using [http://www.dyninst.org/ DyninstAPI], at runtime in the Java virtual machine, or manually using the instrumentation API.&lt;br /&gt;
====Learn about TAU====&lt;br /&gt;
* S. Shende and A. D. Malony, [http://www.cs.uoregon.edu/research/paracomp/publ/htbin/bibify.cgi?cmd=show&amp;amp;coll=JOUR&amp;amp;id=ijhpca05.tau&amp;amp;data_present=no &amp;quot;The TAU Parallel Performance System&amp;quot;] International Journal of High Performance Computing Applications, SAGE Publications, 20(2):287-331, Summer 2006&lt;br /&gt;
&lt;br /&gt;
* [http://tau.uoregon.edu/ Visit] TAU's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://icl.cs.utk.edu/papi/ PAPI (Performance Application Programming Interface)] ==&lt;br /&gt;
PAPI aims to provide the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events.&lt;br /&gt;
&lt;br /&gt;
==== Learn about PAPI ====&lt;br /&gt;
* Browne, S., Dongarra, J., Garner, N., Ho, G., Mucci, P.[http://icl.cs.utk.edu/publications/pub-papers/2000/papi-journal-final.pdf &amp;quot;A Portable Programming Interface for Performance Evaluation on Modern Processors&amp;quot;] The International Journal of High Performance Computing Applications, Volume 14, number 3, pp. 189-204, Fall 2000.&lt;br /&gt;
* [http://icl.cs.utk.edu/papi/ Visit] PAPI's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://www.scalasca.org/ Scalasca] ==&lt;br /&gt;
Scalasca is a scalable performance-analysis tool for parallel applications supporting the programming models MPI, OpenMP, SHMEM, and combinations thereof. Its functionality addresses the entire analysis process including instrumentation, parallel post-processing of performance data, and result presentation. It is based on the idea of automatically searching event traces of parallel applications for execution patterns indicating inefficient behavior. The patterns are classified by category and their significance is quantified for every program phase and system resource involved. The results are made available to the user in a flexible graphical user interface, where they can be investigated on varying levels of granularity.&lt;br /&gt;
&lt;br /&gt;
==== Learn about Scalasca ====&lt;br /&gt;
&lt;br /&gt;
* F. Wolf, B. J. N. Wylie, E. Ábrahám, D. Becker, W. Frings, K. Fürlinger, M. Geimer, M.-A. Hermanns, B. Mohr, S. Moore, M. Pfeifer, Z. Szebeny[http://www.fz-juelich.de/jsc/datapool/KojakPubs/hlrs_ptw08.pdf &amp;quot;Usage of the SCALASCA Toolset for Scalable Performance Analysis of Large-Scale Parallel Applications&amp;quot;]  Proc. 2nd HLRS Parallel Tools Workshop, pp. 157-167, Stuttgart, Germany, July 2008. &lt;br /&gt;
* [http://www.scalasca.org/ Visit] Scalasca's website for more information.&lt;br /&gt;
&lt;br /&gt;
== [http://perfsuite.ncsa.uiuc.edu/ PerfSuite] ==&lt;br /&gt;
PerfSuite is a collection of tools, utilities, and libraries for software performance analysis where the primary design goals are ease of use, comprehensibility, interoperability, and simplicity. This software can provide a good &amp;quot;entry point&amp;quot; for more detailed performance analysis and can help point the way towards selecting other tools and/or techniques using more specialized software if necessary (for example, tools/libraries from academic research groups or third-party commercial software).&lt;br /&gt;
&lt;br /&gt;
==== Learn about PerfSuite ====&lt;br /&gt;
&lt;br /&gt;
* [http://perfsuite.ncsa.uiuc.edu/publications/LCI-2005.pdf &amp;quot;PerfSuite: An Accessible, Open Source Performance Analysis Environment for Linux&amp;quot;]. 6th International Conference on Linux Clusters: The HPC Revolution 2005. Chapel Hill, NC. April 2005.&lt;br /&gt;
* [http://perfsuite.ncsa.uiuc.edu/ Visit] PerfSuite's website for more information.&lt;br /&gt;
'''Bold text'''&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=ENZO&amp;diff=200</id>
		<title>ENZO</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=ENZO&amp;diff=200"/>
		<updated>2009-07-14T20:32:10Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Enzo Version 1.5 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=ENZO Performance Study Summary=&lt;br /&gt;
&lt;br /&gt;
This page shows the performance result from ENZO (svn repository version). We chose this version in part to see the effects of load balancing (not enabled in version 1.5) on scaling performance. The previous performance results for ENZO version 1 are [[EnzoV1Performance | here]].&lt;br /&gt;
&lt;br /&gt;
==Enzo Version 1.5==&lt;br /&gt;
&lt;br /&gt;
Following the release of Enzo 1.5 in November '08 we have done some follow up performance studies. Our initial findings are similar to what we found for version 1.0.1. &lt;br /&gt;
&lt;br /&gt;
The configuration files used were like these:&lt;br /&gt;
&lt;br /&gt;
* [http://nic.uoregon.edu/~scottb/point.inits.large inits]&lt;br /&gt;
* [http://nic.uoregon.edu/~scottb/point.param.large param]&lt;br /&gt;
&lt;br /&gt;
(The grid and particle sizes change between experiments).&lt;br /&gt;
&lt;br /&gt;
This chart shows the scaling behavior of Enzo 1.5 on Kraken:&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoScalingKraken.png]]&lt;br /&gt;
&lt;br /&gt;
Scaling behavior was very similar on Ranger:&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoScalingRanger.png]]&lt;br /&gt;
&lt;br /&gt;
This scaling behavior could be anticipated by looking at the runtime breakdown (mean of 64 processors on Ranger):&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoMeanBreakdown.png]]&lt;br /&gt;
&lt;br /&gt;
With this much time spent in MPI communication, increasing the number of processors allocated to more than 64 is unlikely to result in a much lower total execution time. Looking more closely at MPI_Recv and MPI_Barrier, we see that on average 5.2ms is spent per call in MPI_Recv and 40.4ms in MPI_Barrier. This is much longer than can be explained by communication latencies on Ranger's InfiniBand interconnect. Mostly likely ENZO is experiencing a load imbalance causing some processors to wait for others to enter the MPI_Barrier or MPI_Send.&lt;br /&gt;
&lt;br /&gt;
Next we looked at how enabling load balancing affects performance. This a runtime comparison between non-load balanced (blue) vs. load balanced simulation (red):&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoMeanComp.png]]&lt;br /&gt;
&lt;br /&gt;
Time spent MPI_Barrier decreased but was mostly offset by the increase in time spent in MPI_Recv.&lt;br /&gt;
&lt;br /&gt;
Callpath profiling gives us an idea where most of the costly MPI communications are taking place.&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoCallpathMpiRecv.png]]&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoCallpathMpiBarrier.png]]&lt;br /&gt;
&lt;br /&gt;
MPI Barriers take place in EvolveLevel(). And MPI_Recv takes place in grid::CommunicationSendRegions().&lt;br /&gt;
&lt;br /&gt;
==Snapshot profiles==&lt;br /&gt;
&lt;br /&gt;
Additionally, we used snapshot profiling to get a sense of how ENZO's performance changed over the course of the entire execution. A snapshot was taken at each load balancing step such that each bar represents a single phase of ENZO between two load balancing phases. The first thing to notice is that these phases are regular and short at the beginning of the simulation and become progressively more varied in length with some becoming much longer. &lt;br /&gt;
&lt;br /&gt;
(The time spent before that first load balancing has been removed--mostly initialization)&lt;br /&gt;
&lt;br /&gt;
For MPI_Recv:&lt;br /&gt;
[[Image:EnzoSnapMpiRecvPercent.png|600px]]&lt;br /&gt;
&lt;br /&gt;
For MPI_Barrier:&lt;br /&gt;
[[Image:EnzoSnapMpiBarrierPercent.png|600px]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=ENZO&amp;diff=199</id>
		<title>ENZO</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=ENZO&amp;diff=199"/>
		<updated>2009-07-14T20:30:55Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Enzo Version 1.5 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=ENZO Performance Study Summary=&lt;br /&gt;
&lt;br /&gt;
This page shows the performance result from ENZO (svn repository version). We chose this version in part to see the effects of load balancing (not enabled in version 1.5) on scaling performance. The previous performance results for ENZO version 1 are [[EnzoV1Performance | here]].&lt;br /&gt;
&lt;br /&gt;
==Enzo Version 1.5==&lt;br /&gt;
&lt;br /&gt;
Following the release of Enzo 1.5 in November '08 we have done some follow up performance studies. Our initial findings are similar to what we found for version 1.0.1. &lt;br /&gt;
&lt;br /&gt;
The configuration files used were like these:&lt;br /&gt;
&lt;br /&gt;
* [http://nic.uoregon.edu/~scottb/point.inits.large inits]&lt;br /&gt;
* [http://nic.uoregon.edu/~scottb/point.param.large param]&lt;br /&gt;
&lt;br /&gt;
(The grid and particle sizes change between experiments).&lt;br /&gt;
&lt;br /&gt;
This chart shows the scaling behavior of Enzo 1.5 on Kraken:&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoScalingKraken.png]]&lt;br /&gt;
&lt;br /&gt;
Scaling behavior was very similar on Ranger:&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoScalingRanger.png]]&lt;br /&gt;
&lt;br /&gt;
This scaling behavior could be anticipated by looking at the runtime breakdown (mean of 64 processors on Ranger):&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoMeanBreakdown.png]]&lt;br /&gt;
&lt;br /&gt;
With this much time spent in MPI communication, increasing the number of processors allocated to more than 64 is unlikely to result in a much lower total execution time. Looking more closely at MPI_Recv and MPI_Barrier, we see that on average 5.2ms is spent per call in MPI_Recv and 40.4ms in MPI_Barrier. This is much longer than can be explained by communication latencies on Ranger's InfiniBand interconnect. Mostly likely ENZO is experiencing a load imbalance causing some processors to wait for others to enter the MPI_Barrier or MPI_Send.&lt;br /&gt;
&lt;br /&gt;
Next we looked at how enabling load balancing affects performance. This a runtime comparison between non-load balanced (blue) vs. load balanced simulation (red):&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoMeanComp.png]]&lt;br /&gt;
&lt;br /&gt;
Time spent MPI_Barrier decrease but is mostly offset by the increase in time spent in MPI_Recv.&lt;br /&gt;
&lt;br /&gt;
Callpath profiling gives us an idea where most of the costly MPI communications are taking place.&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoCallpathMpiRecv.png]]&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoCallpathMpiBarrier.png]]&lt;br /&gt;
&lt;br /&gt;
MPI Barriers take place in EvolveLevel(). And MPI_Recv takes place in grid::CommunicationSendRegions().&lt;br /&gt;
&lt;br /&gt;
==Snapshot profiles==&lt;br /&gt;
&lt;br /&gt;
Additionally, we used snapshot profiling to get a sense of how ENZO's performance changed over the course of the entire execution. A snapshot was taken at each load balancing step such that each bar represents a single phase of ENZO between two load balancing phases. The first thing to notice is that these phases are regular and short at the beginning of the simulation and become progressively more varied in length with some becoming much longer. &lt;br /&gt;
&lt;br /&gt;
(The time spent before that first load balancing has been removed--mostly initialization)&lt;br /&gt;
&lt;br /&gt;
For MPI_Recv:&lt;br /&gt;
[[Image:EnzoSnapMpiRecvPercent.png|600px]]&lt;br /&gt;
&lt;br /&gt;
For MPI_Barrier:&lt;br /&gt;
[[Image:EnzoSnapMpiBarrierPercent.png|600px]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=ENZO&amp;diff=198</id>
		<title>ENZO</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=ENZO&amp;diff=198"/>
		<updated>2009-07-14T20:28:29Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Enzo Version 1.5 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=ENZO Performance Study Summary=&lt;br /&gt;
&lt;br /&gt;
This page shows the performance result from ENZO (svn repository version). We chose this version in part to see the effects of load balancing (not enabled in version 1.5) on scaling performance. The previous performance results for ENZO version 1 are [[EnzoV1Performance | here]].&lt;br /&gt;
&lt;br /&gt;
==Enzo Version 1.5==&lt;br /&gt;
&lt;br /&gt;
Following the release of Enzo 1.5 in November '08 we have done some follow up performance studies. Our initial findings are similar to what we found for version 1.0.1. &lt;br /&gt;
&lt;br /&gt;
The configuration files used were like these:&lt;br /&gt;
&lt;br /&gt;
* [http://nic.uoregon.edu/~scottb/point.inits.large inits]&lt;br /&gt;
* [http://nic.uoregon.edu/~scottb/point.param.large param]&lt;br /&gt;
&lt;br /&gt;
(The grid and particle sizes change between experiments).&lt;br /&gt;
&lt;br /&gt;
This chart shows the scaling behavior of Enzo 1.5 on Kraken:&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoScalingKraken.png]]&lt;br /&gt;
&lt;br /&gt;
Scaling behavior was very similar on Ranger:&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoScalingRanger.png]]&lt;br /&gt;
&lt;br /&gt;
This scaling behavior could be anticipated by looking at the runtime breakdown (mean of 64 processors on Ranger):&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoMeanBreakdown.png]]&lt;br /&gt;
&lt;br /&gt;
With this much time spent in MPI communication, increasing the number of processors allocated to more than 64 is unlikely to result in a much lower total execution time. Looking more closely at MPI_Recv and MPI_Barrier we see that on average 5.2ms is spent per call in MPI_Recv and 40.4ms for MPI_Barrier. This is much longer than can be explained by communication latencies on Ranger's InfiniBand interconnect. Mostly likely ENZO is experiencing a load imbalance causing some processors to wait for others to enter the MPI_Barrier or MPI_Send.&lt;br /&gt;
&lt;br /&gt;
Next we looked at how enabling load balancing effects performance. This a runtime comparison between non-load balanced (blue) vs. load balanced simulation (red):&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoMeanComp.png]]&lt;br /&gt;
&lt;br /&gt;
Time spent MPI_Barrier decrease but is mostly offset by the increase in time spent in MPI_Recv.&lt;br /&gt;
&lt;br /&gt;
Callpath profiling gives us an idea where most of the costly MPI communications are taking place.&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoCallpathMpiRecv.png]]&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoCallpathMpiBarrier.png]]&lt;br /&gt;
&lt;br /&gt;
MPI Barriers take place in EvolveLevel(). And MPI_Recv takes place in grid::CommunicationSendRegions().&lt;br /&gt;
&lt;br /&gt;
==Snapshot profiles==&lt;br /&gt;
&lt;br /&gt;
Additionally, we used snapshot profiling to get a sense of how ENZO's performance changed over the course of the entire execution. A snapshot was taken at each load balancing step such that each bar represents a single phase of ENZO between two load balancing phases. The first thing to notice is that these phases are regular and short at the beginning of the simulation and become progressively more varied in length with some becoming much longer. &lt;br /&gt;
&lt;br /&gt;
(The time spent before that first load balancing has been removed--mostly initialization)&lt;br /&gt;
&lt;br /&gt;
For MPI_Recv:&lt;br /&gt;
[[Image:EnzoSnapMpiRecvPercent.png|600px]]&lt;br /&gt;
&lt;br /&gt;
For MPI_Barrier:&lt;br /&gt;
[[Image:EnzoSnapMpiBarrierPercent.png|600px]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=ENZO&amp;diff=197</id>
		<title>ENZO</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=ENZO&amp;diff=197"/>
		<updated>2009-07-14T20:27:40Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* ENZO Performance Study Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=ENZO Performance Study Summary=&lt;br /&gt;
&lt;br /&gt;
This page shows the performance result from ENZO (svn repository version). We chose this version in part to see the effects of load balancing (not enabled in version 1.5) on scaling performance. The previous performance results for ENZO version 1 are [[EnzoV1Performance | here]].&lt;br /&gt;
&lt;br /&gt;
==Enzo Version 1.5==&lt;br /&gt;
&lt;br /&gt;
Following the release of Enzo 1.5 in November '08 we have done some follow up performance studies. Our initial findings are similar to what we found for version 1.0.1. &lt;br /&gt;
&lt;br /&gt;
The configuration files used were like these:&lt;br /&gt;
&lt;br /&gt;
* [http://nic.uoregon.edu/~scottb/point.inits.large inits]&lt;br /&gt;
* [http://nic.uoregon.edu/~scottb/point.param.large param]&lt;br /&gt;
&lt;br /&gt;
(The grid and particle sizes change between experiments).&lt;br /&gt;
&lt;br /&gt;
This chart showing the scaling behavior of Enzo 1.5 on Kraken:&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoScalingKraken.png]]&lt;br /&gt;
&lt;br /&gt;
Scaling behavior was very similar on Ranger:&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoScalingRanger.png]]&lt;br /&gt;
&lt;br /&gt;
This scaling behavior could be anticipated by looking at the runtime breakdown (mean of 64 processors on Ranger):&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoMeanBreakdown.png]]&lt;br /&gt;
&lt;br /&gt;
With this much time spent in MPI communication, increasing the number of processors allocated to more than 64 is unlikely to result in a much lower total execution time. Looking more closely at MPI_Recv and MPI_Barrier we see that on average 5.2ms is spent per call in MPI_Recv and 40.4ms for MPI_Barrier. This is much longer than can be explained by communication latencies on Ranger's InfiniBand interconnect. Mostly likely ENZO is experiencing a load imbalance causing some processors to wait for others to enter the MPI_Barrier or MPI_Send.&lt;br /&gt;
&lt;br /&gt;
Next we looked at how enabling load balancing effects performance. This a runtime comparison between non-load balanced (blue) vs. load balanced simulation (red):&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoMeanComp.png]]&lt;br /&gt;
&lt;br /&gt;
Time spent MPI_Barrier decrease but is mostly offset by the increase in time spent in MPI_Recv.&lt;br /&gt;
&lt;br /&gt;
Callpath profiling gives us an idea where most of the costly MPI communications are taking place.&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoCallpathMpiRecv.png]]&lt;br /&gt;
&lt;br /&gt;
[[Image:EnzoCallpathMpiBarrier.png]]&lt;br /&gt;
&lt;br /&gt;
MPI Barriers take place in EvolveLevel(). And MPI_Recv takes place in grid::CommunicationSendRegions().&lt;br /&gt;
&lt;br /&gt;
==Snapshot profiles==&lt;br /&gt;
&lt;br /&gt;
Additionally, we used snapshot profiling to get a sense of how ENZO's performance changed over the course of the entire execution. A snapshot was taken at each load balancing step such that each bar represents a single phase of ENZO between two load balancing phases. The first thing to notice is that these phases are regular and short at the beginning of the simulation and become progressively more varied in length with some becoming much longer. &lt;br /&gt;
&lt;br /&gt;
(The time spent before that first load balancing has been removed--mostly initialization)&lt;br /&gt;
&lt;br /&gt;
For MPI_Recv:&lt;br /&gt;
[[Image:EnzoSnapMpiRecvPercent.png|600px]]&lt;br /&gt;
&lt;br /&gt;
For MPI_Barrier:&lt;br /&gt;
[[Image:EnzoSnapMpiBarrierPercent.png|600px]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=NEMO3D&amp;diff=196</id>
		<title>NEMO3D</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=NEMO3D&amp;diff=196"/>
		<updated>2009-07-14T20:22:58Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a summary of the performance evaluation of NEMO3D. Our initial focus was on finding hot spots in the code where most of the computational work is being done. In all cases the NEMO3D  benchmark_lanc_thin_no_strain (262144) with recomputed matrices was used. Overhead was calculated on 16 processors.&lt;br /&gt;
&lt;br /&gt;
==Instrumentation overhead==&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
! Run Type &lt;br /&gt;
! Runtime (seconds) &lt;br /&gt;
! Overhead %&lt;br /&gt;
|-&lt;br /&gt;
|Uninstrumented runtime	&lt;br /&gt;
|372&lt;br /&gt;
|NA&lt;br /&gt;
|-&lt;br /&gt;
|Routine+loops instrumented&lt;br /&gt;
|392 &lt;br /&gt;
|5.4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Significant Loops==&lt;br /&gt;
We have found two loops in the source file &amp;quot;h_cvectr_multi.c&amp;quot; that together account for about 90% (with 16 processors) of the runtime of the NEMO3D application. Loop 1 starts at 1235 and ends at 1841. Loop 2 starts at 1270 and ends at 1760. &lt;br /&gt;
===Runtime Breakdown Charts===&lt;br /&gt;
These charts show the runtime breakdown of NEMO3D on processor counts 16,32,64,128,256. Each experiment was run on PSC's SGI Altix system (pople).&lt;br /&gt;
====Legend====&lt;br /&gt;
&amp;lt;font color=black&amp;gt;Entire Experiment&amp;lt;/font&amp;gt; &amp;lt;font color=red&amp;gt;Loop 1&amp;lt;/font&amp;gt; &amp;lt;font color=green&amp;gt;Loop 2&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Nemo_plot_time.png|600px|left]] [[Image:Nemo_plot_fp.png|600px|left]] [[Image:Nemo_plot_dcm.png|600px|left]]&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=People&amp;diff=195</id>
		<title>People</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=People&amp;diff=195"/>
		<updated>2009-07-14T20:19:28Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Allen D. Malony ===&lt;br /&gt;
* Professor, Computer &amp;amp; Information Science, University of Oregon&lt;br /&gt;
&lt;br /&gt;
=== Rick Kufrin ===&lt;br /&gt;
* Senior Research Programmer, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign&lt;br /&gt;
&lt;br /&gt;
=== Shirley V. Moore ===&lt;br /&gt;
* Associate Director of Research and Adjunct Professor Innovative Computing Laboratory, University of Tennessee, Knoxville&lt;br /&gt;
&lt;br /&gt;
=== Nicholas A. Nystrom ===&lt;br /&gt;
* Director, Strategic Applications, Pittsburgh Supercomputing Center&lt;br /&gt;
&lt;br /&gt;
=== Sameer Shende ===&lt;br /&gt;
* Postdoctoral Research Associate, NeuroInformatics Center, University of Oregon&lt;br /&gt;
&lt;br /&gt;
=== Daniel K. Terpstra ===&lt;br /&gt;
* Research Leader, Innovative Computing Laboratory, University of Tennessee, Knoxville&lt;br /&gt;
&lt;br /&gt;
=== Haihang You ===&lt;br /&gt;
* Research Associate, University of Tennessee, Knoxville&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Milestones&amp;diff=194</id>
		<title>Milestones</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Milestones&amp;diff=194"/>
		<updated>2009-07-14T20:13:50Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* PHASE 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Goals that emphasize interaction between institutions are marked in bold.''&lt;br /&gt;
&lt;br /&gt;
==UO==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Add routine-level and message passing AI&lt;br /&gt;
* Develop experimentation framework and user frontend&lt;br /&gt;
* Develop comparative analysis modules&lt;br /&gt;
* Update PerfDMF for regression analysis&lt;br /&gt;
* '''OTF format updates for EPILOG'''&lt;br /&gt;
* '''Evaluate sample-based and direct measurement integration'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Develop controller for parametric experiments&lt;br /&gt;
* Develop user-interface for ACA power tool&lt;br /&gt;
* Build PRA prototype, link with software test harness&lt;br /&gt;
* '''Implement joint measurement support'''&lt;br /&gt;
* '''Update PerfDMF for PerfSuite data'''&lt;br /&gt;
* '''Incorporate PAPI features for multi-core'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Port AI on HPC systems and test&lt;br /&gt;
* Complete AE framework and release&lt;br /&gt;
* Integrate ACA with experimentation system&lt;br /&gt;
* Complete ACA framework and release&lt;br /&gt;
* Integrate PRA with application groups&lt;br /&gt;
* '''Merge automatic profile analysis with KOJAK''' &lt;br /&gt;
&lt;br /&gt;
==UTK==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Define common multi-core events&lt;br /&gt;
* Port Scalasca distributed trace file analysis to cluster environment&lt;br /&gt;
* Develop additional Scalasca patterns for hardware counter profile data&lt;br /&gt;
* '''Automate use of TAU call-path profiles for selective EPILOG tracing'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Implement additional PAPI network components&lt;br /&gt;
* Test distributed trace file analysis on production applications&lt;br /&gt;
* Migrate contextual hardware counter information to PAPI standard&lt;br /&gt;
* '''Incorporate PerfSuite and TAU profiles into Scalasca analysis'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Continue to develop and deploy component PAPI&lt;br /&gt;
* Incorporate events from PAPI components into Scalasca analysis&lt;br /&gt;
* Extend distributed trace analysis to more parallel paradigms&lt;br /&gt;
* '''Integrate low-overhead statistical hardware counter profiling with TAU and PerfSuite'''&lt;br /&gt;
&lt;br /&gt;
==NCSA==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* Incorporate Perfmon2 in hardware counter library&lt;br /&gt;
* Develop user-oriented reference manual&lt;br /&gt;
* Java API design and development (XML access)&lt;br /&gt;
* '''Begin integration with PerfDMF'''&lt;br /&gt;
* '''Install project software suite at NCSA'''&lt;br /&gt;
* PerfSuite v1.0&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* Java XML API developed and released&lt;br /&gt;
* Java hardware counter API underway&lt;br /&gt;
* Develop engineering guide&lt;br /&gt;
* '''Integrate PerfDMF access with PerfSuite tools'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* '''Integration with PerfDMF completed'''&lt;br /&gt;
* '''Joint TAU/PerfSuite automatic analysis tools completed'''&lt;br /&gt;
&lt;br /&gt;
==PSC==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Install current project perrformance toolset at PSC&lt;br /&gt;
* Train consultants in the tools' new, advanced analysis modes&lt;br /&gt;
* Assess performance improvement opportunities of NEMO3D, ENZO, and NAMD groups&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Best Practices&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Support performance engineering by NEMO3D, ENZO, NAMD, Cactus groups&lt;br /&gt;
* Apply updated tool features, tracking performance gains&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Tools for multicore&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Inject performance engineering into additional applications&lt;br /&gt;
* Apply updated tool features, tracking performance gains&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Advanced Tools&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Milestones&amp;diff=193</id>
		<title>Milestones</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Milestones&amp;diff=193"/>
		<updated>2009-07-14T20:13:20Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* PHASE 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Goals that emphasize interaction between institutions are marked in bold.''&lt;br /&gt;
&lt;br /&gt;
==UO==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Add routine-level and message passing AI&lt;br /&gt;
* Develop experimentation framework and user frontend&lt;br /&gt;
* Develop comparative analysis modules&lt;br /&gt;
* Update PerfDMF for regression analysis&lt;br /&gt;
* '''OTF format updates for EPILOG'''&lt;br /&gt;
* '''Evaluate sample-based and direct measurement integration'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Develop controller for parametric experiments&lt;br /&gt;
* Develop user-interface for ACA power tool&lt;br /&gt;
* Build PRA prototype, link with software test harness&lt;br /&gt;
* '''Implement joint measurement support'''&lt;br /&gt;
* '''Update PerfDMF for PerfSuite data'''&lt;br /&gt;
* '''Incorporate PAPI features for multi-core'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Port AI on HPC systems and test&lt;br /&gt;
* Complete AE framework and release&lt;br /&gt;
* Integrate ACA with experimentation system&lt;br /&gt;
* Complete ACA framework and release&lt;br /&gt;
* Integrate PRA with application groups&lt;br /&gt;
* '''Merge automatic profile analysis with KOJAK''' &lt;br /&gt;
&lt;br /&gt;
==UTK==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Define common multi-core events&lt;br /&gt;
* Port Scalasca distributed trace file analysis to cluster environment&lt;br /&gt;
* Develop additional Scalasca patterns for hardware counter profile data&lt;br /&gt;
* '''Automate use of TAU call-path profiles for selective EPILOG tracing'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Implement additional PAPI network components&lt;br /&gt;
* Test distributed trace file analysis on production applications&lt;br /&gt;
* Migrate contextual hardware counter information to PAPI standard&lt;br /&gt;
* '''Incorporate PerfSuite and TAU profiles into EXPERT analysis'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Continue to develop and deploy component PAPI&lt;br /&gt;
* Incorporate events from PAPI components into Scalasca analysis&lt;br /&gt;
* Extend distributed trace analysis to more parallel paradigms&lt;br /&gt;
* '''Integrate low-overhead statistical hardware counter profiling with TAU and PerfSuite'''&lt;br /&gt;
&lt;br /&gt;
==NCSA==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* Incorporate Perfmon2 in hardware counter library&lt;br /&gt;
* Develop user-oriented reference manual&lt;br /&gt;
* Java API design and development (XML access)&lt;br /&gt;
* '''Begin integration with PerfDMF'''&lt;br /&gt;
* '''Install project software suite at NCSA'''&lt;br /&gt;
* PerfSuite v1.0&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* Java XML API developed and released&lt;br /&gt;
* Java hardware counter API underway&lt;br /&gt;
* Develop engineering guide&lt;br /&gt;
* '''Integrate PerfDMF access with PerfSuite tools'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* '''Integration with PerfDMF completed'''&lt;br /&gt;
* '''Joint TAU/PerfSuite automatic analysis tools completed'''&lt;br /&gt;
&lt;br /&gt;
==PSC==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Install current project perrformance toolset at PSC&lt;br /&gt;
* Train consultants in the tools' new, advanced analysis modes&lt;br /&gt;
* Assess performance improvement opportunities of NEMO3D, ENZO, and NAMD groups&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Best Practices&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Support performance engineering by NEMO3D, ENZO, NAMD, Cactus groups&lt;br /&gt;
* Apply updated tool features, tracking performance gains&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Tools for multicore&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Inject performance engineering into additional applications&lt;br /&gt;
* Apply updated tool features, tracking performance gains&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Advanced Tools&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Milestones&amp;diff=192</id>
		<title>Milestones</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Milestones&amp;diff=192"/>
		<updated>2009-07-14T20:12:36Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* PHASE 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''Goals that emphasize interaction between institutions are marked in bold.''&lt;br /&gt;
&lt;br /&gt;
==UO==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Add routine-level and message passing AI&lt;br /&gt;
* Develop experimentation framework and user frontend&lt;br /&gt;
* Develop comparative analysis modules&lt;br /&gt;
* Update PerfDMF for regression analysis&lt;br /&gt;
* '''OTF format updates for EPILOG'''&lt;br /&gt;
* '''Evaluate sample-based and direct measurement integration'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Develop controller for parametric experiments&lt;br /&gt;
* Develop user-interface for ACA power tool&lt;br /&gt;
* Build PRA prototype, link with software test harness&lt;br /&gt;
* '''Implement joint measurement support'''&lt;br /&gt;
* '''Update PerfDMF for PerfSuite data'''&lt;br /&gt;
* '''Incorporate PAPI features for multi-core'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Port AI on HPC systems and test&lt;br /&gt;
* Complete AE framework and release&lt;br /&gt;
* Integrate ACA with experimentation system&lt;br /&gt;
* Complete ACA framework and release&lt;br /&gt;
* Integrate PRA with application groups&lt;br /&gt;
* '''Merge automatic profile analysis with KOJAK''' &lt;br /&gt;
&lt;br /&gt;
==UTK==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Define common multi-core events&lt;br /&gt;
* Port Scalasca distributed trace file analysis to cluster environment&lt;br /&gt;
* Develop additional Scalasca patterns for hardware counter profile data&lt;br /&gt;
* '''Automate use of TAU call-path profiles for selective EPILOG tracing'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Implement additional PAPI network components&lt;br /&gt;
* Test distributed trace file analysis on production applications&lt;br /&gt;
* Migrate contextual hardware counter information to PAPI standard&lt;br /&gt;
* '''Incorporate PerfSuite and TAU profiles into EXPERT analysis'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Continue to develop and deploy component PAPI&lt;br /&gt;
* Incorporate events from PAPI components into KOJAK analysis&lt;br /&gt;
* Extend distributed trace analysis to more parallel paradigms&lt;br /&gt;
* '''Integrate low-overhead statistical hardware counter profiling with TAU and PerfSuite'''&lt;br /&gt;
&lt;br /&gt;
==NCSA==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* Incorporate Perfmon2 in hardware counter library&lt;br /&gt;
* Develop user-oriented reference manual&lt;br /&gt;
* Java API design and development (XML access)&lt;br /&gt;
* '''Begin integration with PerfDMF'''&lt;br /&gt;
* '''Install project software suite at NCSA'''&lt;br /&gt;
* PerfSuite v1.0&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* Java XML API developed and released&lt;br /&gt;
* Java hardware counter API underway&lt;br /&gt;
* Develop engineering guide&lt;br /&gt;
* '''Integrate PerfDMF access with PerfSuite tools'''&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Update core library for current processors/OS&lt;br /&gt;
* '''Integration with PerfDMF completed'''&lt;br /&gt;
* '''Joint TAU/PerfSuite automatic analysis tools completed'''&lt;br /&gt;
&lt;br /&gt;
==PSC==&lt;br /&gt;
====PHASE 1====&lt;br /&gt;
* Install current project perrformance toolset at PSC&lt;br /&gt;
* Train consultants in the tools' new, advanced analysis modes&lt;br /&gt;
* Assess performance improvement opportunities of NEMO3D, ENZO, and NAMD groups&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Best Practices&lt;br /&gt;
&lt;br /&gt;
====PHASE 2====&lt;br /&gt;
* Support performance engineering by NEMO3D, ENZO, NAMD, Cactus groups&lt;br /&gt;
* Apply updated tool features, tracking performance gains&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Tools for multicore&lt;br /&gt;
&lt;br /&gt;
====PHASE 3====&lt;br /&gt;
* Inject performance engineering into additional applications&lt;br /&gt;
* Apply updated tool features, tracking performance gains&lt;br /&gt;
* Coordinate AG Performance Engineering seminars: Advanced Tools&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=191</id>
		<title>Project Info</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=191"/>
		<updated>2009-07-14T20:07:33Z</updated>

		<summary type="html">&lt;p&gt;Shirley: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Productivity from Open, INtegrated Tools (POINT) project is funded as part of the NSF's [http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5174 Software Development for Cyberinfrastructure (SDCI)] program. The goal of the this project is to integrate, harden, and deploy an open, portable, robust performance tools environment for the NSF-funded high-performance computing centers. We are leveraging the widely-used [http://tau.uoregon.edu TAU], [http://icl.cs.utk.edu/papi/ PAPI], [http://www.scalasca.org/  Scalasca], and [http://perfsuite.ncsa.uiuc.edu/ PerfSuite] technologies as core components, improving them as necessary to meet user and application needs.&lt;br /&gt;
* [[The POINT of Performance|Project News Release]]&lt;br /&gt;
* [[Milestones|Project Milestones]] (members only)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Four major institutions are collaborating in this project: [http://www.uoregon.edu University of Oregon], [http://www.utk.edu University of Tennessee at Knoxville] and [http://www.ncsa.uiuc.edu National Center for Supercomputing Applications] are developing and integrating the performance tools. The [http://psc.edu Pittsburgh Supercomputing Center] is leading the application engagement and outreach effort.&lt;br /&gt;
&lt;br /&gt;
* [[People|Principal Researchers]]&lt;br /&gt;
&lt;br /&gt;
==Supercomputing Conference 2009==&lt;br /&gt;
The POINT team will have several events at this year's SC'09 Conference in Portland, OR. More info will be on our [[News|News page]].&lt;br /&gt;
&lt;br /&gt;
== Contact ==&lt;br /&gt;
We would like to hear from anyone interested in the POINT project, if you have any questions or comments please [mailto:%70%6f%69%6e%74%40%6e%69%63%2e%75%6f%72%65%67%6f%6e%2e%65%64%75 send us an email].&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
	<entry>
		<id>http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=190</id>
		<title>Project Info</title>
		<link rel="alternate" type="text/html" href="http://www.nic.uoregon.edu/mediawiki-point/index.php?title=Project_Info&amp;diff=190"/>
		<updated>2009-07-14T20:01:09Z</updated>

		<summary type="html">&lt;p&gt;Shirley: /* Supercomputing Conference 2009 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Productivity from Open, INtegrated Tools (POINT) project is funded as part of the NSF's [http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5174 Software Development for Cyberinfrastructure (SDCI)] program. The goal of the this project will be to integrate, harden, and deploy an open, portable, robust performance tools environment for NSF-funded high-performance computing centers. We will leverage the widely-used [http://tau.uoregon.edu TAU], [http://icl.cs.utk.edu/papi/ PAPI], [http://icl.cs.utk.edu/kojak/  KOJAK], and [http://perfsuite.ncsa.uiuc.edu/ PerfSuite] technologies as core components, improving them as necessary to meet our stringent support guidelines.&lt;br /&gt;
* [[The POINT of Performance|Project News Release]]&lt;br /&gt;
* [[Milestones|Project Milestones]] (members only)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Four major institutions are collaborating in this project: [http://www.uoregon.edu University of Oregon], [http://www.utk.edu University of Tennessee at Knoxville] and [http://www.ncsa.uiuc.edu National Center for Supercomputing Applications] are developing the performance tools. The [http://psc.edu Pittsburgh Supercomputing Center] is helping to introduce these tools to the HPC community.&lt;br /&gt;
&lt;br /&gt;
* [[People|Principal Researchers]]&lt;br /&gt;
&lt;br /&gt;
==Supercomputing Conference 2009==&lt;br /&gt;
The POINT team has several events at this year's SC'09 Conference in Portland, OR. More info on our [[News|News page]].&lt;br /&gt;
&lt;br /&gt;
== Contact ==&lt;br /&gt;
We would like to hear from anyone interested in the POINT project, if you have any questions or comments please [mailto:%70%6f%69%6e%74%40%6e%69%63%2e%75%6f%72%65%67%6f%6e%2e%65%64%75 send us an email].&lt;/div&gt;</summary>
		<author><name>Shirley</name></author>
		
	</entry>
</feed>