What’s Abandoned Data Costing You?

The average corporate employee turnover rate is 15.1 percent across all industries, with some verticals experiencing rates as high as 30 percent. For an organization with 10,000 employees, that amounts to 1,500 to 3,000 departures annually (Compensation Force: 2013 Turnover Rates by Industry).

When an employee leaves an organization, the IT department typically wipes or recycles the hard drive containing that employee’s digital files and email. However, IT rarely cleans up or manages the former employee’s data on corporate networks and servers.

For a company of 10,000 with a conservative annual turnover of 1,500 employees, this can easily account for 60 TB of data abandoned in the data center each year. Over 10 years, that grows to more than half a petabyte.

Abandoned data is unstructured files, email and other data owned by ex-employees that languishes on networks and servers. Gartner estimates the 2013 average annual storage cost per raw TB of capacity at $3,212 (Gartner: IT Key Metrics Data 2014: Key Infrastructure Measures: Storage Analysis: Current Year, Dec. 2013). Abandoned data can therefore account for millions of dollars in wasted expense each year.
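To make the arithmetic concrete, here is a minimal sketch of the cost model described above. The roughly 40 GB-per-departing-employee figure is an assumption inferred from the 1,500-employee / 60 TB scenario, not a measured value:

```python
# Back-of-the-envelope cost of abandoned data, using the figures above.
# GB_PER_EMPLOYEE is an assumption implied by the 1,500-employee / 60 TB
# scenario, not a measured value.

EMPLOYEES = 10_000
TURNOVER_RATE = 0.15            # ~15% annual turnover
GB_PER_EMPLOYEE = 40            # assumed unstructured data left per departure
COST_PER_TB_YEAR = 3212         # Gartner 2013 annual storage cost per raw TB

departures = int(EMPLOYEES * TURNOVER_RATE)          # 1,500 people per year
abandoned_tb = departures * GB_PER_EMPLOYEE / 1000   # 60 TB per year
annual_waste = abandoned_tb * COST_PER_TB_YEAR       # ~$192,720 in year one

print(f"{departures} departures leave ~{abandoned_tb:.0f} TB behind each year")
print(f"Year-one storage cost: ${annual_waste:,.0f}")

# Abandoned data accumulates: after 10 years it exceeds half a petabyte,
# and each year's leftover capacity keeps incurring the annual storage cost.
ten_year_tb = abandoned_tb * 10
print(f"After 10 years: {ten_year_tb:.0f} TB ({ten_year_tb / 1000:.1f} PB)")
```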

Abandoned data consists largely of old working documents that have long outlived their business value: revisions of letters, old spreadsheets, presentations and aged email. However, a small percentage of this content can easily contain sensitive files and email. It is this small percentage of contracts, confidential email exchanges, client records and similar documents that adds risk and liability for the corporation.

The bulk of the data is typically what is known as redundant, outdated and trivial content – or ROT – that is simply taking up space and resulting in unnecessary management and data center costs.

The following are data factors to consider…

Keep reading: download our free whitepaper.

Posted in Uncategorized | Leave a comment

Defensible Deletion Methodology

How to Control Risk and Manage Expenses Associated with User Content

What is Defensible Deletion?

For decades, organizations have had strategies in place to protect and safeguard data. Enforcing those strategies, however, has been one of the greatest challenges corporations have faced over the past two decades. Users hoard data on desktops, including archiving email in local repositories known as PSTs. Every year, storage administrators add massive silos of disk so employees can save more and more files, even maintaining space for users who left the company years ago.

Archiving and records managers continually copy important documents and email into proprietary repositories for long term retention. Business continuity and backup administrators replicate all content on a weekly basis and archive data to offsite storage for safekeeping in case of a disaster.

Data is continually replicated, re-stored and hidden throughout an enterprise. Even with sound policies and procedures in place, simply finding data so that it can be managed is an enormous undertaking. Defensible deletion is a process, within an overall information governance policy, that provides comprehensive knowledge and access to all user data so that policy can be applied and data can be managed according to specific compliance requirements.

Implementing a defensible deletion methodology not only mitigates long term risks and liabilities related to enterprise data assets, but also saves time and expense in supporting ongoing litigation and eDiscovery efforts, while reducing data center budget used for storing and managing data that is no longer useful.

Keep reading: download our free Defensible Deletion whitepaper.


Backing up the House: Why Backup isn’t Archive

A data protection manager I met with at a client site recently summed up the world of backup in a nutshell: “In the past I was told to backup the house, now they want me to know about everything in the house, how many paintings, rugs, chairs, etc. and be able to get them out at a moment’s notice.”

Backup was never designed to provide the level of detail about data needed to support today’s data governance requirements. Clients use backup as an archive of their sensitive data, yet it does not provide the knowledge needed to support legal, eDiscovery, compliance and regulatory needs. How are you going to support an eDiscovery request from a 10-year-old backup tape when you no longer have the backup software that created it?

Backup is not archive – but it could be. Backup captures all enterprise data: user files, email, archives and more. If it exists, backup knows about it. However, as my friend the data protection manager stated, there is no way to know what is contained in backup. Sure, you have a catalog, but finding specific emails from a user is not an easy task. And as backup data ages, it becomes more and more complex to know what you have and to get it back from tape or disk.

Extracting knowledge of what is in backup is the first step in leveraging this data for archiving – knowledge well beyond the backup catalog, such as detailed metadata of documents, spreadsheets and presentations. Beyond metadata, certain data governance requirements demand knowledge of content, including keyword search of email and files to find sensitive material.

Security and compliance also require finding content based on patterns such as PII and PHI. Without this level of knowledge, users back up the whole “house” and it becomes an assumptive archive once its disaster recovery role is complete. The result is a “save everything” strategy, which is neither a smart nor an economical governance strategy.
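As an illustration only (not a description of any particular product’s scanner), pattern-based PII detection of the kind described above typically combines regular expressions with a validity check such as the Luhn checksum. The card and SSN values below are well-known test numbers, not real PII:

```python
import re

# Candidate patterns. A real PII scanner adds context checks and more formats;
# this is a deliberately minimal sketch.
CC_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # 13-16 digit card-like runs
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # 123-45-6789 style SSNs

def luhn_ok(number: str) -> bool:
    """Luhn checksum: weeds out most random digit runs that match CC_RE."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def scan(text: str) -> dict:
    """Return candidate credit card numbers and SSNs found in the text."""
    return {
        "credit_cards": [m for m in CC_RE.findall(text) if luhn_ok(m)],
        "ssns": SSN_RE.findall(text),
    }

# Both values are published test numbers, not real PII.
hits = scan("Card 4111 1111 1111 1111 on file, SSN 078-05-1120.")
print(hits)
```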

The second step in leveraging backup for archiving is gaining access to information locked in proprietary backup formats. Restoring backup data from last week’s tapes or disk images is not very complex; finding a specific user’s mailbox containing specific keywords, however, is all but impossible.

So when legal calls and asks for all email related to a specific client or set of keywords, the backup manager is forced to restore full backups just to find a small set of content. As the backup data ages it becomes even more complex: companies change backup software or transition to new backup strategies. Over time, getting access to this legacy backup data is very time consuming and expensive, if not impossible.

Leveraging Backup for Archiving

Delivering knowledge of backup data is complex. Index Engines, however, provides not only that knowledge – including detailed content and keywords – but also access. Finding and restoring a specific email from a 10-year-old tape no longer requires the original backup software or a full restore. Index Engines has cracked the code and is able to leverage backup data to support archiving for legal and compliance needs.

Organizations have learned that backup is not an archive. Storing old tapes in a salt mine or accumulating backup images on disk will become problematic down the road.

Lack of knowledge of, and access to, the data are not characteristics of a proper archive. Nor is archiving everything by storing all backup content a sound strategy for organizations that face frequent lawsuits, regulatory requirements and strict data governance policies. These backup archives will result in risks, liabilities and fines down the road that tarnish a company’s reputation.

Eliminating the proprietary lock that backup software has on data, Index Engines delivers knowledge of what is in backup images and provides intelligent access to the data that has value and should be archived. Finding and archiving data without the need for the software that generated the backup is now possible. This allows backup to be leveraged for archiving and delivers support for today’s growing information governance requirements.

Index Engines supports direct indexing of backup tapes and disk images. Supporting all common backup formats, data can be indexed at a high level of metadata or down to full-text content, capturing keywords from user email deep within Exchange and Notes databases. Beyond indexing, data can then be restored from backup – maintaining all the important metadata – without the need for the original software.

Two classic use cases of Index Engines technology are cleaning up legacy backup data on tape or disk for clients who were using backup as an archive, and eliminating the need to make tapes from disk-based backups (or to ship recent disaster recovery tapes out to offsite storage).

Index Engines delivers the intelligence and access to these environments to extract what is needed according to policy, which is typically a small volume of the total capacity, and archive it on disk according to retention requirements. Once data is archived and secured it is searchable and accessible to support even the most complex data governance requirements.


Index Engines Adds One-Click Data Profiling Reports to Catalyst Express, the Company’s Free 5 TB Enterprise Data Management Software

Catalyst Express gives organizations the ability to automate reports on file content and metadata including location, name, size, extension, dates, duplicates, PII and more

HOLMDEL, NJ – Index Engines has announced the addition of stored reports and automation to Catalyst Express, the information management company’s free user data management software.

These reports allow one-click access to detailed knowledge of up to 5 TB of user data, including aged, abandoned and active data, duplicates, large files, PII, and more. Reports can be run on demand or scheduled to run as needed.

“Most organizations don’t know what they have, if it has value, if it’s stored in the correct place, if it poses a risk or liability, or if it’s employee vacation photos and music libraries,” Index Engines VP Jim McGann said. “Catalyst will give them this insight into their data and help them determine and execute data policies.”

These canned reports can be used to understand what exists and develop an appropriate disposition strategy, or they can be customized according to the user’s needs.

Customized reports can include file metadata attributes such as path, file name, size, extension, accessed date, modified date, host names, Active Directory group membership, as well as security metadata including read, write, and browse access to files.

Reports included in this new product include:
• Abandoned files – those not accessed in more than 3 years
• Active files – those accessed or modified within 90 days
• Duplicate content – files with the same document signature
• Large files – files larger than 1 GB or 4 GB
• Multimedia files – all video, music and image files
• PII – files containing credit card and Social Security numbers
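As a rough sketch of how reports like these can be produced (an illustration, not the Catalyst implementation), filesystem metadata alone is enough to bucket files into several of the categories above; the `/srv/share` path is a hypothetical mount point:

```python
import os
import time
from collections import defaultdict

# Thresholds matching the report definitions above.
YEAR = 365 * 24 * 3600
ABANDONED_AGE = 3 * YEAR        # not accessed in more than 3 years
ACTIVE_AGE = 90 * 24 * 3600     # accessed or modified within 90 days
LARGE_BYTES = 1 << 30           # files larger than 1 GB

def profile(root: str) -> dict:
    """Walk a tree and bucket files by metadata; return counts per category."""
    now = time.time()
    buckets = defaultdict(int)
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue        # unreadable file: skip; a real tool would log it
            age = now - max(st.st_atime, st.st_mtime)
            if age > ABANDONED_AGE:
                buckets["abandoned"] += 1
            elif age < ACTIVE_AGE:
                buckets["active"] += 1
            if st.st_size > LARGE_BYTES:
                buckets["large"] += 1
    return dict(buckets)

print(profile("/srv/share"))    # hypothetical user-share mount point
```

A production profiler would also index owners, extensions and content signatures, but the bucketing logic is the same idea.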

Index Engines’ Catalyst product line scales to large global enterprise data center environments consisting of petabytes of unstructured data. The new Catalyst Express software is a no-cost entry point that allows clients to leverage the value of the Catalyst platform and begin to control costs and risk associated with unstructured user data.

Leveraging the rich metadata or full-text indexing in conjunction with Active Directory integration and security analysis through indexing of file ACLs, content can be managed with a single click.

High-level reports allow instant insight into enterprise storage providing unprecedented knowledge of data assets so decisions can be made on disposition, governance policies and even data security.

Upgrade options for Catalyst Express include:
• Additional terabytes of capacity
• Advanced data management policies
• Integrated forensic archiving and eDiscovery workflows
• Detailed indexing of file system audit trails
• Metadata and full content indexing of Exchange, Notes, and Sharepoint
• Federated search for distributed environments
• Support for data within backup images (tape or disk)

“Catalyst is implemented worldwide to help manage petabytes of critical business data assets,” McGann said. “With this new product Index Engines is providing a great opportunity to begin managing risk and costs associated with user data at an attractive $0.”

Catalyst Express is available for download at http://www.indexengines.com/catalyst-express
###


5 Things I Found in My Garage that Suggest You Need a Data Center Intervention

When my car could no longer comfortably fit in the garage, I figured it was time to bite the bullet and see exactly what was forcing me to upgrade my garage capacity.

After I pulled everything from the garage out onto the driveway and stood looking at my collection of stuff, I realized I had amassed exactly what I warn data center admins about keeping in their data centers: stuff of value mixed in with redundant, outdated and trivial junk.

Sensitive documents. First there was a large box sitting out in the open. I remember rummaging through it last February. It has tax documents, pay stubs, doctor receipts, credit card bills and similar financial statements. Sure, it contains tons of my PII, but is it really at risk in my garage?

Of course it is. Most of this could be shredded and I’d never miss my June 2011 American Express bill. The documents I need – W2s, tax returns – easily fit into one folder that can be archived safely in the safe deposit box I pay the bank for anyway. By organizing this, I can reclaim about six square feet of space and eliminate the risk of my nosy house sitter wandering into my garage and seeing the box labeled “Financial and Tax Records”.

It’s the same in the data center: your networks and backup data are likely crawling with PII and PHI issues. Depending on age, industry and company policies, much of that should be remediated. The rest needs to go into a secure archive or be encrypted.

Redundant, Outdated, Trivial Data. Then there was a four-shelf rack of stuff that I thought I needed, can’t use right now, but may use again one day: crock pots (two of them), tools, a snow blower, three shovels, old propane tanks and a few boxes of old household stuff.

I could use it. I likely won’t. I definitely don’t need all of it. Toss out the snow blower that doesn’t quite work, retire the boxes of old lamps, radios and other outdated items, and relocate the three snow shovels to the storage shed, and I start making progress. The crockpot came in handy last year, and you can never have too many tools, right? Condensed to two shelves.

ROT (redundant, outdated, trivial data) isn’t active data. It’s a mix of junk, outdated files and some things that may need to be kept just in case. If it hasn’t been accessed in the last two or three years, it’s probably safe to move it offline and reclaim some server capacity. (I’m betting on your user share server.)

Active Data. There are some freshly placed bags from the local home improvement store. I have grass seed, some mulch, a few gallons of pool shock and some bath tub sealant. While the best place for it probably isn’t along the passenger side of my car, I need these products today and over the next few weeks.

Active data needs to be managed in place, so it is not lost and I can take advantage of it. Cleaning up all the junk around it makes it easier and allows me to leverage what has value.

Duplicate Data. A few garbage bags and shelves filled with bulk warehouse items: cases of water, toilet paper, canned vegetables, bags of charcoal and laundry detergent.

To me this has value, but when you have 96 of something that isn’t bottled water, it’s a waste of storage budget. Remediate these copies. I’ve seen organizations reclaim 25% of their network capacity just by getting rid of duplicates.
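The same principle applies to duplicate files. As a sketch (not any particular vendor’s implementation), files that share a content hash – the “document signature” – are copies of one another, and every copy past the first is reclaimable space:

```python
import hashlib
import os
from collections import defaultdict

def sha256_of(path: str) -> str:
    """Content signature: identical files hash identically regardless of name."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def duplicate_report(root: str) -> int:
    """Return bytes reclaimable by keeping one copy per unique signature."""
    by_sig = defaultdict(list)
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            by_sig[sha256_of(path)].append(path)
    reclaimable = 0
    for paths in by_sig.values():
        for extra in paths[1:]:         # every copy past the first is waste
            reclaimable += os.path.getsize(extra)
    return reclaimable
```

A production tool would group files by size first and hash only the size collisions, but the signature idea is the same.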

Aged and Former Employee Data. Behind the fourth case of water is a mystery box I haven’t seen in a while. It’s old training and marketing material from a former employment. It was outdated long before I left and is next to some old dry cleaning I haven’t worn in seven years… and will probably never wear again. Next to this are a dozen boxes from my kids room, old books and stuff they will never use. They moved out 5 years ago and have no plan to reclaim this stuff, nor does anyone know it exists.

It happens at data centers too. Employees move around within the organization. Others move on to different companies. Sometimes the data is just outdated and abandoned.

Aged and former employee data can make up as much as 50% of an organization’s network data. Find out how much of your data either hasn’t been accessed in three years, or is over two years old and owned by inactive or former employees. My aged and former-employee stuff is going in the garbage. Yours may be better off remediated, or at least moved offline.

Cleaning up. In one afternoon I was able to clear out over half the contents of my garage. While cleaning up the data center might take a little longer, it is just as simple.

Data profiling technology helps categorize and define user data based on metadata and individual file content so you can make decisions on it. Tier to the cloud. Archive. Remediate. Manage in place. Move offline.

I can even help you get started. Try Catalyst Express, a free download from Index Engines that enables you to understand and manage up to 5 TB of LAN data. Start on a user share server, or one used by the sales or services department. Those tend to be hot spots for ROT, ex-employee data and PII.

From there we can help get the rest of your LAN, email and legacy data in order.

As for your garage, you’re on your own.


Web Event: Managing Risk and Cost Associated with Legacy Backup Data For Financial Services

Join EMC’s Director of eDiscovery and Compliance, Jim Shook, and Index Engines for this exclusive web event on June 15.

Organizations have amassed significant volumes of legacy data through the archiving of backup tapes. These tapes contain snapshots of files and email going back decades.

Additionally, through all the mergers and acquisitions in the financial services industry, legacy tapes contain records of organizations that no longer exist even though the data lives on.

Data on legacy tapes poses a risk and liability for financial services firms, as the data archived and stored in offsite vaults could become the “smoking gun” evidence in a trial or regulatory investigation.

EMC and information management company Index Engines have teamed up to provide a solution for managing legacy data and controlling the inherent risks and costs associated with this sensitive content.

Join us for this important webinar to discover:
• The legal risk associated with data archived on tape,
• Best practices for managing content & controlling risk,
• A case study of a financial services firm and how they conquered this challenge, and
• An analysis of the hidden costs and how they can be contained.

Register at: https://cossprereg.btci.com/prereg/key.process?key=P8B8F8JNR


Index Engines Launches 5 TB Free Product

Index Engines has announced its new Catalyst Express, no-cost software that provides full content and metadata indexing of up to 5 TB of storage containing unstructured user data.

This software-only download creates deep metadata and full-text searchable indexes on documents and emails stored on a file server of the user’s choice, which can be used to answer questions like:

– What types and classes of data are being stored, and where
– When are files being accessed and modified
– Which users can access what files
– Where is the personal, sensitive, non-compliant, regulatory or high-value data
– How much storage is consumed by redundant, old, trivial or stale data

Catalyst Express is tightly integrated with Active Directory, and includes customizable summary reports, dashboards, scheduled system monitoring and workflow automation, plus support for data migration and deletion with defensible audit trails.

“Most organizations don’t know what they have, if it has value, if it’s stored in the correct place, if it poses a risk or liability or if it’s employee vacation photos and music libraries,” Index Engines VP Jim McGann said. “Catalyst will give them this insight into their data and help them determine and execute data policies.”

Index Engines’ Catalyst product line scales to large global enterprise data center environments consisting of petabytes of unstructured data. The new Catalyst Express software is a no-cost entry point that allows clients to leverage the value of the Catalyst platform and begin to control the costs and risk associated with unstructured user data.

Leveraging the rich metadata or full-text indexing in conjunction with Active Directory integration and security analysis through indexing of file ACLs, content can be managed with a single click.

High-level reports allow instant insight into enterprise storage providing unprecedented knowledge of data assets so decisions can be made on disposition, governance policies and even data security.

Upgrade options for Catalyst Express include:

Additional terabytes of capacity
Advanced data management policies
Integrated forensic archiving and eDiscovery workflows
Detailed indexing of file system audit trails
Metadata and full content indexing of Exchange, Notes, and SharePoint
Federated search for distributed environments
Support for data within backup images (tape or disk)

“Catalyst is implemented worldwide to help manage petabytes of critical business data assets,” McGann said. “With this new product Index Engines is providing a great opportunity to begin managing risk and costs associated with user data at an attractive $0.”

Catalyst Express is available for download at http://www.indexengines.com/catalyst-express-2


Index Engines Collaborates with EMC, Launches Workshop for Legacy Tape Data Management

EMC and Index Engines launch a new workshop that delivers a customized analysis of existing legacy backup data environments to determine the optimal tape-to-disk migration plan, reducing the risk and cost of maintaining access to legacy backup data.

LAS VEGAS and HOLMDEL, N.J.– Information management company Index Engines has teamed up with EMC to launch a new workshop that delivers an intelligent analysis of an organization’s legacy backup tape data and assists in developing an information governance strategy for migrating sensitive data required by legal and compliance to disk for improved management and access. Index Engines is a Select partner in the EMC Business Partner Program for Technology Connect.

The Workshop for Legacy Tape Data Access Service, delivered through EMC® professional services, leverages Index Engines’ Catalyst software for the ingestion of legacy tape catalogs, metadata analysis and reporting on the content, and a disposition strategy that cost-effectively restores data required by legal and compliance to disk, allowing tapes to be remediated.

“Through this workshop, EMC can deliver knowledge of legacy tape data along with an intelligent disposition strategy that supports clients’ information governance needs,” said Jim Clancy, Senior Vice President, Global Sales, EMC Data Protection Solutions.

This new workshop provides the data necessary to make an informed decision on migration options given a customer’s actual circumstances, and addresses two key pain points:

– Offering a simplified access point and disposition options to clients who have significant pain around legacy tape data, including the costs of managing, archiving and restoring data in support of legal, compliance and regulatory requirements.

– Providing clients who are managing multiple non-production backup environments a more effective means for validating the optimal method for maintaining access to legacy tape data.

During the workshop, Index Engines’ Catalog Engine will directly ingest TSM, NetBackup or CommVault backup catalogs. The Index Engines solution delivers full reporting and analysis of the content along with direct restoration of files and email without the need for the original backup software.

This EMC-run workshop provides details back to customers, enabling them to make an informed decision about how to mitigate the challenges of maintaining legacy data access.

Then, EMC will develop metadata-level reports on the tape contents along with a disposition strategy recommendation, focusing on migration of valuable data to a disk-based archive, retirement of the legacy platform, and remediation of tapes.

“Legacy tape content has become a legal and security risk, especially for highly regulated organizations including financial services, healthcare, government, and energy firms,” said Jim McGann, Vice President, Index Engines.

The workshop is available this week through EMC. Please contact your EMC or EMC Partner Sales representative for more details.

About Index Engines

Index Engines provides unprecedented file-level knowledge to manage the growing costs and risks associated with unstructured user data.

EMC is a registered trademark or trademark of EMC Corporation in the United States and/or other countries.

All products mentioned are trademarked by their respective organizations.
