Chapter 2 HPSS Planning
HPSS Installation Guide, Release 4.5, Revision 2 (September 2002)

of sites. Usually, the only time the policy values need to be altered is when there is an unusual HPSS setup.

The Location Server itself will give warning when a problem is occurring by posting alarms to SSM. Obtain the information for the Location Server alarms listed in the HPSS Error Manual. To get a better view of an alarm in its context, view the Location Server's statistics screen.

If the Location Server consistently reports a heavy load condition, increase the number of request threads and recycle the Location Server. Remember to increase the number of threads on the Location Server's basic server configuration screen as well. If this does not help, consider replicating the Location Server on a different machine. Note that a heavy load on the Location Server should be a very rare occurrence.

2.11.10 Logging

Excessive logging by the HPSS servers can degrade the overall performance of HPSS. If this is the case, it may be desirable to limit the message types that are logged by particular servers. The Logging Policy can be updated to control which message types are logged, and a default Log Policy may be specified to define which messages are logged for servers without their own policy. Typically, Trace, Security, Accounting, Debug, and Status messages are not logged. Other message types can also be disabled. Once the Logging Policy is updated for one or more HPSS servers, the Log Clients associated with those servers must be reinitialized.

2.11.11 MPI-IO API

MPI-IO client applications must be aware of HPSS Client API performance on certain kinds of data transfers (see Section 2.11.7): HPSS is optimized for transferring large, contiguous blocks of data.

The MPI-IO interface allows for many kinds of transfers that will not perform well over the HPSS file system.
In particular, MPI-IO's use of file types enables specification of scatter-gather operations on files. When performed as noncollective reads and writes, these discontiguous accesses will result, in the best case, in suboptimal performance and, in the worst case, if excessive file fragmentation results, in failure to complete the access. Collective I/O operations may be able to minimize or eliminate the performance problems that would result from equivalent noncollective I/O operations by coalescing the discontiguous accesses into a contiguous one.

An MPI-IO application should make use of HPSS environment variables (see Section 7.1: Client API Configuration on page 413) and file hints at open time to secure the best match of HPSS resources to a given task, as described in the HPSS Programmer's Reference, Volume 1.

HPSS MPI-IO includes an automatic caching facility for files that are opened using MPI_MODE_UNIQUE_OPEN when each participating client node has a unique view of the opened file. When caching is enabled, performance for small data accesses can be significantly improved, provided the application makes reasonable use of locality of reference. MPI-IO file caching is described in the HPSS Programmer's Reference, Volume 1.