Actifio uses the native functionality of Oracle RMAN to ingest data, create a golden copy, update it incrementally with changed blocks, and maintain it in an incremental-forever architecture. Actifio holds the virtual image in its native format for efficient use, and all Oracle storage formats are supported, including raw volumes, file systems, and ASM.

Once a golden image is captured, it can be kept in a ready state for immediate use, deduplicated for long-term retention, or sent to a remote location for disaster recovery purposes.
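The incremental-forever flow described above can be sketched as a toy model: a full golden copy is ingested once, and every later capture merges in only the blocks that changed. The block-map representation and function names here are illustrative assumptions, not Actifio's actual implementation.

```python
# Toy model of an incremental-forever capture flow: a full "golden copy"
# is taken once, then each capture merges only the changed blocks.
# All names here are illustrative; this is not Actifio's implementation.

def initial_ingest(production_blocks):
    """Full copy of production data: the golden copy."""
    return dict(production_blocks)

def incremental_update(golden_copy, changed_blocks):
    """Merge only the blocks that changed since the last capture."""
    golden_copy.update(changed_blocks)
    return golden_copy

# The first capture copies everything; later captures move far less data.
production = {0: b"boot", 1: b"data-v1", 2: b"logs-v1"}
golden = initial_ingest(production)

# Only block 1 changed before the next capture.
golden = incremental_update(golden, {1: b"data-v2"})

assert golden == {0: b"boot", 1: b"data-v2", 2: b"logs-v1"}
```

Because each update rewrites only changed blocks in place, the golden copy is always a complete, current image without ever taking another full backup.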

Senior AIX Engineer

The Company

At Spectrum, you'll find yourself in the company of some of the industry's smartest and most reliable professionals, at a company that rewards dedication, values innovation, and supports growth.

Thrive in an environment that promotes teamwork and shared success. Build on a foundation of mutual respect. Join a company that understands rewarding careers, starting with this opportunity:

Job Description

The IBM Power Systems team is looking for an experienced AIX Engineer with advanced OS administration experience, including OS build, hardware deployment and automation. The position is responsible for researching, designing, building, testing, deploying, analyzing, administering, and maintaining IBM Power systems hardware and software infrastructure technology components to meet current and future business goals.

  • Research and test new AIX server technologies
  • Design, build, and deploy enterprise AIX computing environments
  • Support production AIX servers and infrastructure components
  • Collaborate with infrastructure teams and business partners to achieve strong results


  • Senior-level knowledge and experience with AIX midrange systems
  • Experience in configuration, installation and day-to-day operations of AIX server environments
  • Knowledge and experience with IBM Power Systems Virtual IO
  • Working knowledge of TCP/IP protocols
  • Initiative and project planning skills required for longer-term objectives
  • Technically astute in diagnostic methodologies
  • Ability to take direction and guidance to achieve accurate problem isolation and resolution
  • Interacts and collaborates effectively and positively with peers, customers, and management
  • Ability to manage multiple work streams, effectively communicating status and issue escalation
  • Ability to succeed in a fast-paced environment, quickly adapting to change to meet the demands of the business
  • Ability to present ideas in business-friendly and user-friendly language to technical staff as well as senior management

Email us with your LinkedIn profile and/or CV to express interest, and we can arrange a chat.

Grant McKenzie

Four Ways that Backup Software is Not Cutting It

Data is more critical today than ever before.  Recent statistics suggest that 90% of the data in the world has been created in the last two years, and as we look forward, new technologies like the Internet of Things will accelerate data growth.  As our dependency on data increases, our need for continuous data access and consistent data protection becomes more critical than ever.

The mainstream backup applications in use today were developed in the late 1980s, when hard drive sizes were measured in hundreds of megabytes and Microsoft was shipping Windows 2.11.  As the world has evolved, these applications have bolted on new features to adapt to exploding data volumes, but the underlying legacy architectures were never designed to deal with the challenges we face today.

Let’s look at four ways that backup software is not cutting it.

Backup Windows

Historically, backup software has used an approach of incremental and full backups.  An incremental backup copies only the changes since the last backup (full or incremental), so the amount of data transferred is typically small and backups complete quickly.  A full backup, in contrast, is a complete copy of the production data, and so is typically very large and can become unmanageable as information grows.
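Some rough arithmetic makes the difference concrete.  The dataset size and daily change rate below are hypothetical figures chosen only to illustrate why full backups dominate the backup window:

```python
# Hypothetical weekly data movement for a 10 TB dataset with a 2% daily
# change rate, comparing a weekly-full / daily-incremental schedule
# against an incremental-forever schedule. Figures are illustrative.

dataset_tb = 10.0
daily_change_rate = 0.02  # 2% of the data changes each day

weekly_full = dataset_tb                             # one full copy
six_incrementals = 6 * dataset_tb * daily_change_rate
traditional_week = weekly_full + six_incrementals    # 11.2 TB moved

incremental_forever_week = 7 * dataset_tb * daily_change_rate  # 1.4 TB moved

print(f"full + incrementals: {traditional_week:.1f} TB/week")
print(f"incremental-forever: {incremental_forever_week:.1f} TB/week")
```

Under these assumptions, the single weekly full accounts for nearly 90% of the data moved, which is exactly the traffic that blows out backup windows.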

In order to minimize backup windows, customers prefer to rely on incremental backups.  However, this has serious implications for recovery times.  Full backups take much longer to complete but enable fast recoveries.  We need something better.

Recovery Times

In the past, people focused primarily on minimizing backup windows, but the reality is that recovery is what matters.  Does it even matter if you can back up your data daily but can never recover it?  The answer is clearly no, and so a key element of any protection solution is how fast you can recover your data.

The recovery challenge goes back to the incremental/full backup model.  Incremental backups are small, but recovery requires the team to first restore the previous full backup and then apply every subsequent incremental in order.  This process can be time consuming and risky, since a failure on any incremental results in a total recovery failure.  Recovering from a full backup is much simpler, since the entire image comes from a single backup.
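The chain dependency described above can be sketched in a few lines: a recovery point is the last full backup plus every subsequent incremental applied in order, and one bad link breaks the whole restore.  The data model and names are illustrative assumptions.

```python
# Toy restore: a recovery point is the last full backup plus every
# subsequent incremental, applied in order. If any link in the chain is
# unreadable, the whole restore fails. Names are illustrative.

def restore(full_backup, incrementals):
    image = dict(full_backup)
    for inc in incrementals:
        if inc is None:  # a corrupt or missing incremental
            raise RuntimeError("restore failed: broken incremental chain")
        image.update(inc)
    return image

full = {0: b"base"}
good_chain = [{1: b"mon"}, {2: b"tue"}]
assert restore(full, good_chain) == {0: b"base", 1: b"mon", 2: b"tue"}

# One damaged incremental in the middle breaks the entire recovery,
# even though the later incrementals are intact.
bad_chain = [{1: b"mon"}, None, {3: b"wed"}]
try:
    restore(full, bad_chain)
except RuntimeError as e:
    print(e)  # restore failed: broken incremental chain
```

The longer the incremental chain grows between fulls, the slower the restore and the higher the chance that one bad link takes the whole recovery point with it.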

Customers are forced to trade off fast backups with slow recoveries (incrementals) against slow backups with faster recoveries (fulls).  This trade-off is unsustainable in today's data-centric world.  You need a solution that enables both fast backup and fast recovery.

Complex DR Processes

As IT has become more critical to business operations, the challenge of disaster recovery has become more important than ever.  A single outage can result in significant revenue and reputational impact, and so companies must be prepared for outages of all types, including unexpected disasters.

The challenge with traditional backup is that large scale recoveries can take days or even weeks, and as a result, recovering from a disaster can be extremely problematic.  Adding disk as a backup target can provide incremental improvements in recovery times, but the core challenge of lengthy recoveries remains.  It is for this reason that many companies are unable to test disaster recovery plans because the time and effort required is more than they have available.  This is a scary situation because without a test, you cannot be sure if your DR plan will actually work.

We need faster recovery methods that allow for instant data recoveries locally and in the cloud.  These methods should allow data to be presented instantly and run directly from the protection environment without the need for a lengthy restore process.

Inability to Leverage Protected Data

Today’s IT infrastructures are constantly evolving, and in order to ensure reliability and consistency, companies must thoroughly test all proposed changes or upgrades prior to rolling them into production.  In practice, this requires large lab environments where companies must invest large sums of money to create, store, mount, and access copies of production data.  This capacity is in addition to any existing investments in production or protection, and so represents significant inefficiency.

In an ideal world, we need a solution that allows for instant read/write enabled data recoveries for testing purposes.  By instantly mounting a copy of production data, lab storage requirements can be significantly reduced and tests can be performed on the latest production copies.  This will simplify lab environments and improve testing quality.
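One common way to provide such instant writable copies is copy-on-write: every reader shares the protected golden image, while each mount's writes land in a private overlay.  The sketch below is a minimal illustration of that general technique, not any vendor's actual implementation; the class and method names are assumptions.

```python
# Sketch of a copy-on-write "instant mount": the test copy shares the
# protected golden image and stores only its own writes in an overlay,
# so presenting a writable copy requires no full restore. Illustrative only.

class CowMount:
    def __init__(self, golden_image):
        self._base = golden_image   # shared protected image, never modified
        self._overlay = {}          # this mount's private writes

    def read(self, block):
        # Prefer this mount's own writes, fall back to the shared base.
        return self._overlay.get(block, self._base.get(block))

    def write(self, block, data):
        self._overlay[block] = data  # the base image stays untouched

golden = {0: b"prod-0", 1: b"prod-1"}
test_copy = CowMount(golden)
test_copy.write(1, b"test-change")

assert test_copy.read(1) == b"test-change"   # the test sees its own write
assert test_copy.read(0) == b"prod-0"        # unchanged blocks come from base
assert golden[1] == b"prod-1"                # the protected copy is untouched
```

Because storage is consumed only for the overlay's changed blocks, many writable test copies can share one protected image at a fraction of the cost of full lab copies.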

In summary, IT has evolved over the last three decades and data protection has struggled to keep up.  Traditional backup models do not provide the flexibility, performance, and scale that today’s businesses demand.  We need to rethink data protection to ensure that it aligns with modern business requirements and delivers the availability we need.