Refer to the following for this information:
AaBbCc — The names of commands, files, and directories; on-screen computer output.
AaBbCc — What you type, when contrasted with on-screen computer output.
AaBbCc — Book titles, new words or terms, words to be emphasized.
Replace command-line variables with real names or values. Use ls -a to list all files. To delete a file, type rm filename. This manual uses the following conventions to show alert messages, which are intended to prevent injury to the user or bystanders as well as property damage, and important messages that are useful to the user. Warning — This indicates a hazardous situation that could result in death or serious personal injury if the user does not perform the procedure correctly.
Caution — This indicates a hazardous situation that could result in minor or moderate personal injury if the user does not perform the procedure correctly. This signal also indicates that damage to the product or other property may occur if the user does not perform the procedure correctly.
Caution — This indicates that surfaces are hot and might cause personal injury if touched. Avoid contact.
Caution — This indicates that hazardous voltages are present. To reduce the risk of electric shock and danger to personal health, follow the instructions.
Tip — This indicates information that could help the user to use the product more effectively. An alert message in the text consists of a signal indicating an alert level followed by an alert statement. A space of one line precedes and follows an alert statement. Caution — The following tasks regarding this product and the optional products provided by Fujitsu should only be performed by a certified service engineer.
Users must not perform these tasks. Performing these tasks incorrectly may cause malfunction. Caution — This indicates a hazardous situation that could result in minor or moderate personal injury if the user does not perform the procedure correctly. The servers are heavy. Two people might be required to carry the chassis and install it in the rack. Warning — Certain tasks in this manual should only be performed by a certified service engineer.
Users must not perform these tasks. Performing these tasks incorrectly may cause electric shock, injury, or fire. Caution — Do not make mechanical or electrical modifications to the equipment. Using this product after modifying it or rebuilding it by overhaul may cause unexpected injury to the user or bystanders, or damage to their property.
If you have any comments or requests regarding this document, or if you find any unclear statements in the document, please state your points specifically on the form at the following URL. These topics identify key components of the SPARC Enterprise T and T servers, including major boards and internal system cables, as well as front and rear panel features. The following table provides a summary of the circuit boards used in these servers. The following figure shows the layout of the T server front panel, including the power and system locator buttons and the various status and fault LEDs.
Note — The front panel also provides access to internal hard drives, the removable media drive (if equipped), and the two front USB ports. The ILOM show faulty command provides details about any faults that occur. The following figure shows the locations of the rear panel LEDs. The following table provides descriptions of the LEDs located on the rear panel.
These topics explain how to use various diagnostic tools to monitor server status and troubleshoot faults in the server. The service processor provides a range of system management and diagnostic tools that enable you to monitor server operations and troubleshoot server problems.
The following is a high-level summary of the various diagnostic tools that are available on the server. ILOM also provides various commands for investigating system status. For example, when the Solaris software detects a fault, it displays the fault, logs it, and passes information to ILOM, where it is logged. Depending on the fault, one or more LEDs might also be illuminated. The shell prompt looks like this: ->
The ILOM browser interface supports the same set of features and functions as the shell, but through windows on a browser interface.
Note — Unless indicated otherwise, all examples of interaction with the service processor are depicted with ILOM shell commands. Multiple service processor accounts can be active concurrently. The following flowchart illustrates the complementary relationship of the different diagnostic tools and indicates a default sequence of use. The following table provides brief descriptions of the troubleshooting actions shown in the flowchart. It also provides links to topics with additional information on each diagnostic action.
Run the ILOM show faulty command to check for faults (flowchart item 4). Check whether the fault displayed includes a uuid and sunw-msg-id (flowchart item 5). The following table provides quick-reference information about the various LEDs. It also points to more detailed descriptions for each. If the Service Required LED is lit, use the show faulty command to obtain additional information about the affected component.
These topics explain how to use ILOM, the service processor firmware, to diagnose faults and verify successful repairs. ILOM runs on the service processor, independently of the host; therefore, ILOM firmware and software continue to function when the server OS goes offline or when the server is powered off. The service processor can detect when a fault is no longer present. Many environmental faults can recover automatically.
For example, a temporary condition may cause the computer room temperature to rise above the maximum threshold, producing an overtemperature fault in the server. The service processor can automatically detect when a FRU is removed. In many cases, it does this even if the FRU is removed while the service processor is not running.
Note — If the service processor does not automatically clear a fault state after the fault is corrected, you must perform these tasks manually. Note — ILOM does not automatically detect hard drive replacement.
The Solaris Predictive Self-Healing technology does not monitor hard drives for faults. As a result, the service processor does not recognize hard drive faults and will not light the fault LEDs on either the chassis or the hard drive itself. Use the Solaris message files to view hard drive faults. For detailed information about ILOM features that are specific to this server, see the.
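Because the service processor does not flag hard drive faults, one practical approach is to search the Solaris message files for disk-driver warnings. The sketch below runs against a fabricated sample file; on a live system you would point the same grep at /var/adm/messages (the log lines, device paths, and message IDs here are invented for illustration):

```shell
# Create a sample log in the style of Solaris /var/adm/messages.
# These entries are fabricated for illustration only.
cat > /tmp/messages.sample <<'EOF'
Jun  1 12:00:01 host scsi: [ID 107833 kern.warning] WARNING: /pci@0/scsi@2/sd@0,0 (sd0):
Jun  1 12:00:01 host     Error for Command: read(10)   Error Level: Retryable
Jun  1 12:00:05 host genunix: [ID 408114 kern.info] NOTICE: system ready
EOF

# Pull out the warning entries, as you would with the real log:
#   grep WARNING /var/adm/messages
grep 'WARNING' /tmp/messages.sample
```

The grep output identifies the affected device (sd0 in this sample), which you can then match against the physical drive slot.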
Before you can run ILOM commands, you must connect to the service processor. You can do this using either of the following methods:
This distribution may include materials developed by third parties. Fujitsu and the Fujitsu logo are registered trademarks of Fujitsu Limited. For Safe Operation — This manual contains important information regarding the use and handling of this product.
Prompt Notations — The following prompt notations are used in this manual. Notes on Safety — This manual provides the following important alert messages:
Warning (Maintenance — Damage): The server is heavy. Two persons are required to remove it from the rack. Warning (Maintenance — Electric shock): Never attempt to run the server with the covers removed. Hazardous voltage is present.
The system supplies power to the power distribution board even when the server is powered off. To avoid personal injury or damage to the server, you must disconnect the power cords before servicing the power distribution board. The system supplies power to the power supply backplane even when the server is powered off. To avoid personal injury or damage to the server, you must disconnect the power cords before servicing the power supply backplane.
For server models with DC input power, do not disconnect the power cable at the Wago connector on the server DC power supply unit.
Instead, turn off the power at the circuit breaker on the power source. Warning (Maintenance — Extremely hot): Some components on the motherboard might be hot. Use caution when handling the motherboard, especially near the CPU heat sink. Use care when removing the bus bar screws to avoid touching a heat sink, which can be dangerously hot.
For More Information. Use of the Web is changing in fundamental ways, driven by Web 2.0. The character of applications and services is changing too.
Increasingly, users don't need to install anything, upgrade anything, license anything, subscribe to anything, or even buy anything in order to participate and transact. Web users can even interact directly with content, changing it and improving it. Intellectual property is shared rather than locked away, and the most popular services are available free of charge.
Even very small transactions are now encouraged, becoming large in aggregate. Social networking and other collaborative sites let like-minded people from around the world share information on an enormous range of topics and issues. Business transactions, too, are now predominantly Web based.
Serving this dynamic and growing space is becoming very challenging for datacenter operations. Services need to be able to start small and scale very rapidly, often doubling capacity every three months even as they remain highly available. Infrastructure must keep up with these enormous scalability demands, without generating additional administrative burden. Unfortunately, most datacenters are already severely constrained by both real estate and power — and energy costs are rising.
There is also a new appreciation for the role that the datacenter plays in reducing energy consumption and pollution. Virtualization has emerged as an extremely important tool as organizations seek to consolidate redundant infrastructure, simplify administration, and leverage under-utilized systems. Security too has never been more important, with increasing price of data loss and corruption. In addressing these challenges, organizations can ill afford proprietary infrastructure that imposes arbitrary limitations.
Very high levels of integration help reduce latency, lower costs, and improve security and reliability. Balanced system design provides support for a wide range of application types — from Web services to high-performance computing (HPC).
Uniformity of management interfaces and adoption of standards help reduce administrative costs. Now chip multithreading (CMT) technology is evolving rapidly to meet the constantly changing demands of a wide range of Web and other applications. Marked by the prevalence of Web services and service-oriented architecture (SOA), the emerging Participation Age promises the ability to deliver rich new content and high-bandwidth services to larger numbers of users than ever before.
Through this transition, organizations across many industries hope to address larger markets, reduce costs, and gain better insights into their customers.
At the same time, an increasingly broad array of wired and wireless client devices is bringing network computing into the everyday lives of millions of people.
Web scale applications engender a new pace and urgency to infrastructure deployment. Organizations must accelerate time to market and time to service, while delivering scalable high-quality and high-performance applications and services. Many need to be able to start small with the ability to scale very quickly, with new customers and innovative new Web services often implying a doubling of capacity in months rather than years.
At the same time, organizations must reduce their environmental impact by working within the power, cooling, and space available in their current datacenters. Operational costs too are receiving new scrutiny, along with system administrative costs that can account for up to 40 percent of an IT budget.
Simplicity and speed are paramount, giving organizations the ability to respond quickly to dynamic business conditions. Organizations are also striving to eliminate vendor lock-in as they look to preserve previous, current, and future investments.
Coincident with the need to scale services, many datacenters are recognizing the advantages of deploying fewer standard platforms to run a mixture of commercial and technical workloads.
This process involves consolidating under-utilized and often sprawling server infrastructures with effective virtualization solutions that serve to enhance business agility, improve disaster recovery, and reduce operating costs. This focus can help reduce energy costs and break through datacenter capacity constraints by improving the amount of realized performance for each watt of power the datacenter consumes.
As systems are consolidated onto more dense and capable computing infrastructure, demand for datacenter real estate is also reduced. With careful planning, this approach can also improve service uptime and reliability by reducing hardware failures resulting from excess heat load. Servers with high levels of standard reliability, availability, and serviceability RAS are now considered a requirement.
Organizations are increasingly interested in securing all communications with their customers and partners. Encryption is also increasingly important for storage, helping to secure stored and archived data even as it provides a mechanism to detect tampering and data corruption. Unfortunately, the computational costs of increased encryption can increase the burden on already over-taxed computational resources.
Security also needs to take place at line speed, without introducing bottlenecks that can impact the customer experience or slow transactions. Solutions must help to ensure security and privacy for clients and compliance for the organization, all without impacting performance or increasing costs. Addressing these challenges has outstripped the capabilities of traditional processors. Processor manufacturers have long exploited Moore's Law transistor gains in chip real estate.
Today these traditional processors employ very high frequencies along with a variety of sophisticated tactics to accelerate a single instruction pipeline, including: While these techniques have produced faster processors with impressive-sounding multiple-gigahertz frequencies, they have largely resulted in complex, hot, and power-hungry processors that are not well suited to the types of workloads often found in modern datacenters.
In fact, many datacenter workloads are simply unable to take advantage of the hard-won instruction-level parallelism (ILP) provided by these processors. Applications with high shared-memory use and high simultaneous user or transaction counts are typically more focused on processing a large number of simultaneous threads (thread-level parallelism, or TLP) rather than on running a single thread as quickly as possible (ILP). Making matters worse, the majority of the ILP in existing applications has already been extracted, and further gains promise to be small.
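As a rough, process-level analogy for TLP — a sketch only, not how CMT hardware works; the file path and worker count are arbitrary — consider handling several independent requests concurrently instead of making any one of them run faster:

```shell
# Throughput-oriented workload sketch: four independent "requests"
# are handled concurrently as background jobs, rather than
# accelerating a single request.
out=/tmp/tlp_demo.out
: > "$out"

for i in 1 2 3 4; do
  (
    echo "request $i handled" >> "$out"   # each worker is independent
  ) &
done
wait                                      # join all four workers

wc -l < "$out"                            # all four requests completed
```

The aggregate work finishes in roughly the time of one request, which is the throughput argument behind many-threaded processor designs.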
In addition, microprocessor frequency scaling itself has leveled off in the multiple-gigahertz range because of power issues: with higher clock speeds, each successive processor generation has seemingly demanded more power than the last. Deeply pipelined superscalar processors require still more power, and the approach is fundamentally limited by the ability to cool the processors. To address these issues, many in the microprocessor industry have used the transistor budget provided by Moore's Law to group two or even four conventional processor cores on a single physical die — creating multicore processors, or chip multiprocessors (CMP).
The individual processor cores introduced by many CMP designs have no greater performance than previous single-processor chips, and in fact, have been observed to run single-threaded applications more slowly than single-core processor versions.
However, the aggregate chip performance increases, since multiple programs or multiple threads can be accommodated in parallel (thread-level parallelism). Unfortunately, most currently available or soon-to-be-available chip multiprocessors simply replicate cores from existing single-threaded processor designs.
This approach typically yields only slight improvements in aggregate performance since it ignores key performance issues such as memory speed and hardware thread context switching. Sun engineers were early to recognize the disparity between processor speeds and memory access rates. While processor speeds continue to double every two years, memory speeds have typically doubled only every six years. As a result, memory latency now dominates much application performance, erasing even very impressive gains in clock rates.
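Taking those quoted doubling periods at face value (two years for processor speed, six years for memory speed), the relative gap after y years is 2^(y/2) / 2^(y/6). Over a 12-year span, for example:

```latex
\frac{2^{12/2}}{2^{12/6}} = \frac{2^{6}}{2^{2}} = \frac{64}{4} = 16
```

Processor speed pulls ahead of memory speed by roughly a factor of 16 over such a span, which is why memory latency comes to dominate application performance.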