Tool Mentor: TPC – Control, Deploy and Maintain Data
TM134 - How to use TotalStorage Productivity Center for Control, Deploy and Maintain Data
Tool: IBM TotalStorage Productivity Center
Relationships
Main Description

Context

Tool mentors explain how a tool can perform tasks, which are part of ITUP processes and activities. The tasks are listed as Related Elements in the Relationships section.

You can see the details of how processes and activities are supported by this tool mentor by clicking the links next to the icons:

Details

Today, more customers are looking for ways to organize their corporate information, manage it efficiently and establish service levels for different classes of storage. Productivity Center for Data can assist with all three elements. Productivity Center for Data is designed to provide autonomic computing capacity management through its file-system-extension capability. Through monitoring, Productivity Center for Data detects when a user-defined threshold has been reached and extends the file system to help prevent a potential outage.
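
The pattern behind this capability can be sketched in a few lines. The snippet below is only an illustration of threshold-driven file-system extension, not the TPC interface: the 80% threshold, the path and the extend_filesystem function are assumptions made for the example.

    # Minimal sketch of threshold-driven file-system extension (illustrative only;
    # the function names and the 80% threshold are assumptions, not the TPC interface).
    import shutil

    THRESHOLD = 0.80  # user-defined utilization threshold (assumed value)

    def utilization(path: str) -> float:
        """Return the fraction of the file system at `path` that is in use."""
        usage = shutil.disk_usage(path)
        return usage.used / usage.total

    def extend_filesystem(path: str, extra_gb: int) -> None:
        """Placeholder for the provisioning step that grows the file system."""
        print(f"Extending {path} by {extra_gb} GB to avert an out-of-space outage")

    def check_and_extend(path: str) -> None:
        """Detect a breached threshold and respond before the file system fills."""
        if utilization(path) >= THRESHOLD:
            extend_filesystem(path, extra_gb=10)

    check_and_extend("/")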

TotalStorage Productivity Center for Data provides:

  • Automated identification of the storage resources in an infrastructure and analysis of how effectively those resources are being used
  • File-system and file-level evaluation that uncovers categories of files which, if deleted or archived, can significantly reduce the amount of data that must be stored, backed up and managed
  • Automated control through customizable policies whose actions can include centralized alerting, distributed responsibility and fully automated response
  • Prediction of future growth and future at-risk conditions based on historical information
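
As an illustration of the last point, growth prediction from historical samples can be as simple as a linear trend fit. The sketch below uses hypothetical capacity figures and a least-squares slope; it is not how TPC exposes its forecasting, just the underlying idea of turning history into a days-until-full estimate.

    # Illustrative sketch of growth prediction from historical samples (assumed data
    # and a simple linear fit; TPC's own forecasting is not exposed this way).
    from datetime import date

    # (date, GB used) samples for one file system -- hypothetical history
    history = [(date(2006, 1, 1), 400), (date(2006, 2, 1), 430), (date(2006, 3, 1), 465)]
    capacity_gb = 600

    days = [(d - history[0][0]).days for d, _ in history]
    used = [u for _, u in history]

    # Least-squares slope: average daily growth in GB
    n = len(days)
    mean_x, mean_y = sum(days) / n, sum(used) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, used))
             / sum((x - mean_x) ** 2 for x in days))

    days_until_full = (capacity_gb - used[-1]) / slope
    print(f"Growing ~{slope:.2f} GB/day; at-risk in about {days_until_full:.0f} days")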

The Basic Problem

Storage needs are rising and the challenge of managing disparate storage systems is growing. Many business environments see a storage growth rate of over 50% per year. Business applications running across multiple operating systems are driving this growth. To provide value to the business, these applications must be highly available. The storage resources that these applications depend on are typically spread across storage systems from different vendors, each with different costs, performance characteristics and unique management interfaces.

The goal of Data Lifecycle Management is to add or reallocate storage in a way that ensures application availability while matching the cost of storage to the value of data.

Logical Steps To Implementing Data Lifecycle Management

 [Diagram: logical steps to implementing Data Lifecycle Management]

How TPC for Data Helps Solve the Problem

IBM® TotalStorage Productivity Center for Data helps in the Planning and Assessment phase of Data Lifecycle Management by allowing users to identify, evaluate, control, and predict their storage assets. TPC for Data can identify a baseline view of your storage environment and answer questions such as: 1) What is your current utilization? 2) Do you have allocated but unused database space? 3) Which users are consuming the most space? TPC for Data also helps users evaluate storage resources by conducting wasted-space analysis and file- and directory-level analysis, and by uncovering orphan, obsolete and misused files. TPC for Data provides control of your storage resources by allowing you to establish centralized alerts, set automated responses, and implement quotas. Finally, you can predict future growth and at-risk situations and identify the fastest-growing users, file systems and database tables. By establishing a base understanding of your storage environment, with an emphasis on discovering areas where simple actions can deliver rapid return on investment, TPC for Data plays a critical role in Data Lifecycle Management.
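
To make the evaluation step concrete, the following sketch scans a directory tree for files that have not been read in a year and totals them by owner, the kind of question TPC for Data answers through its reports. The /data scan root and the 365-day cutoff are assumptions for the example; TPC performs such scans through its own agents and reporting interface, not a script like this.

    # Illustrative evaluation pass: find files not read in the last year and total
    # them by owner (assumed 365-day cutoff and /data scan root; not a TPC feature,
    # just the idea behind a stale-data report).
    import os, time
    from collections import defaultdict

    CUTOFF = time.time() - 365 * 24 * 3600
    stale_by_owner = defaultdict(int)

    for root, _dirs, files in os.walk("/data"):    # assumed scan root
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                           # skip files that vanish mid-scan
            if st.st_atime < CUTOFF:               # candidate for archive or deletion
                stale_by_owner[st.st_uid] += st.st_size

    for uid, size in sorted(stale_by_owner.items(), key=lambda kv: -kv[1]):
        print(f"uid {uid}: {size / 2**30:.1f} GB of stale data")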

Methodology for Space Reclamation - A Key Component to "Knowing Your Data"

The storage pyramid was developed by IBM Tivoli® Brand Services, Development, and a large insurance customer. The idea behind the pyramid is to work from a base of all raw storage, then use TPC to determine places where raw storage is lost - either to system overheads like RAID or operating systems, or to operational losses such as unformatted space, unallocated space, and so on. The goal is to optimize and track operational losses through measurement and analysis; a worked example of the tiers follows the list below.

 [Diagram: the storage pyramid]

  1. Raw Storage Capacity is all of the available space on your storage systems that you can elect to use.
  2. Formatted capacity is the capacity available after ranks have been given RAID types and the ranks have been formatted.
  3. The next tier is the amount of formatted capacity that is assigned to LUNs.
  4. Capacity of assigned LUNs is the sum of all of the LUNs that have been assigned to host computers. Some space may be left unassigned on purpose to allow for file system or database growth on host computers. The size of this reserve pool can be determined by examining the storage utilization trends from each host.
  5. Capacity seen by the OS is a measure of the overhead introduced by the operating system (and/or file system). Some of the losses here are due to OS use, such as FAT tables; the rest could be due to storage assigned to a host but not yet included in its storage management system (think of hdisks prior to running cfgmgr).
  6. Once storage has been formatted, allocated, and put into volumes or partitions, you can measure the actual amount of data stored in those file systems or databases and determine how much allocated space remains unused.
  7. Finally, you can look at the data stored in your file systems to determine its business value. Inactive data, orphaned data and data that can be archived can now be analyzed and managed.
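
A worked example helps show how the tiers relate. The capacity figures below are hypothetical; the point is that each tier is measured with TPC, and the difference between adjacent tiers is the loss to investigate and, where possible, reclaim.

    # Worked example of the storage pyramid (all capacity figures are hypothetical).
    # Each tier would be measured with TPC; the loss between adjacent tiers is the
    # reclamation target to analyze.
    tiers = [
        ("Raw storage capacity",        100.0),   # TB
        ("Formatted (RAID) capacity",    80.0),
        ("Capacity assigned to LUNs",    70.0),
        ("LUNs assigned to hosts",       60.0),
        ("Capacity seen by the OS",      55.0),
        ("Data stored in file systems",  35.0),
        ("Data with business value",     25.0),
    ]

    for (upper, upper_tb), (lower, lower_tb) in zip(tiers, tiers[1:]):
        loss = upper_tb - lower_tb
        print(f"{upper} -> {lower}: {loss:.0f} TB lost ({loss / upper_tb:.0%} of the tier above)")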

For More Information

For more information about this tool, click on the link for this tool at the top of this page.