
RackWare Brings Cloud-Based Disaster Recovery for Physical and Virtual Workloads

New RackWare Management Module 3.0 Solution Provides Cost-Effective Protection for the Majority of Workloads in Data Centers

SANTA CLARA, CA -- (Marketwired) -- 07/29/14 -- RackWare, the software provider that integrates data center and cloud resources into a scalable and intelligently managed computing environment, today announced the general availability of RackWare Management Module (RMM) 3.0. Providing a simple and cost-effective solution for enterprise data centers, whether traditional or cloud-based, RMM 3.0 drastically lowers the cost of disaster recovery, allowing enterprises to ensure protection of mission-critical IT resources in the event of planned and unplanned outages. With RMM 3.0, enterprises can seamlessly extend their existing IT architecture into the cloud with minimal disruption.

RackWare's flexible and automated cloud management solution enables enterprises to move their workloads seamlessly between private, public and hybrid cloud environments, while expanding and contracting resources as needed. With the announcement of RMM 3.0, RackWare combines its intelligent scaling and migration capabilities with cloud-based disaster recovery, providing enterprises with a complete cloud management solution for business-critical use cases. The newly added capability provides whole-server protection and failover at a fraction of the cost of running a fully replicated data center architected for high-availability or clustering technologies.

Requiring just hours or days to implement and delivering a high availability-to-cost ratio, RMM 3.0 empowers enterprises to execute disaster recovery for the modern data center. The new solution provides an unprecedented level of redundancy while dramatically reducing the burden of purchasing, implementing and maintaining a fully replicated disaster recovery site. By utilizing flexible cloud infrastructure, enterprises can protect a workload in as little as one hour and test as frequently as needed. The solution brings physical, virtual and cloud-based workloads up to the level of protection previously reserved for mission-critical systems guarded by expensive and complex high-availability solutions. Moreover, because the disaster recovery capability can be used with any cloud infrastructure, RMM 3.0 frees enterprises from vendor lock-in, allowing them to adapt along with their evolving infrastructure needs.

Key new functionality of RMM 3.0 includes:

  • Cloning of production servers - enabling full replication of the operating system, applications and data from production servers into cloud recovery instances;

  • Incremental synchronization - changes in the operating system, applications and data are synchronized to the recovery instances as the production instances change, with only the differences transmitted, saving bandwidth and resources (see the illustrative sketch following this list);

  • Cloud-to-Cloud - production and recovery instances can span heterogeneous cloud infrastructure or remain within the same infrastructure, across Amazon, Rackspace, CenturyLink, VMware, IBM SoftLayer and OpenStack, among others;

  • Physical-to-Cloud - physical production servers in traditional data centers can be protected by cloned instances into any cloud;

  • Failover - should an outage occur on the production system, the recovery instance is fully synchronized and automatically takes over workload processing;

  • Failback - once the production server is restored, the recovery instance synchronizes all changes that took place during the outage back to the production server for normalized operations;

  • Complete protection - Recovery Time Objective (RTO) and Recovery Point Objective (RPO) targets exceed expectations by expanding the scope of disaster recovery to include workloads that are normally under-protected.
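
For illustration only, the delta-based replication described in the incremental synchronization item above can be sketched in a few lines of Python. RackWare has not published the internals of its synchronization engine, so the block size, function names and the send_to_recovery() transport call below are assumptions rather than RackWare's API; the sketch simply shows how hashing fixed-size blocks lets a sender transmit only the blocks that have changed.

```python
# Hypothetical sketch only: RackWare has not published its synchronization
# internals. Block size, function names and the transport call are assumptions.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks; an illustrative choice


def block_hashes(path):
    """Return one SHA-256 digest per fixed-size block of the file at `path`."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes


def changed_blocks(production_path, recovery_hashes):
    """Yield (index, data) for production blocks that differ from the recovery copy."""
    with open(production_path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if index >= len(recovery_hashes) or recovery_hashes[index] != digest:
                yield index, block  # only the differences are transmitted
            index += 1


# Example flow (paths and send_to_recovery() are hypothetical):
# recovery_hashes = block_hashes("/recovery/disk.img")
# for index, data in changed_blocks("/production/disk.img", recovery_hashes):
#     send_to_recovery(index, data)
```

A production-grade synchronization engine would also handle file metadata, sparse regions and resumable transfers, which this sketch deliberately omits.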

RMM 3.0 is generally available now in the United States, Canada, the United Kingdom, France, India and Japan. To learn more, request a free demonstration or purchase the new technology, visit the RackWare website.

Supporting Quotes
"RackWare provides a solid new disaster recovery solution that will allow us to dramatically reduce downtime during an outage with failover capabilities of complex and expensive high-availability solutions, and at a fraction of the cost," said Mark Kocour, associate principal - Global IT for ZS Associates. "Using the new RackWare Management Module 3.0 disaster recovery solution, we are able to easily extend our disaster recovery strategy into the cloud without having to buy additional specialized hardware or duplicate our production systems. This allows us to protect a larger number of workloads across our data center, while increasing their availability."

"The RackWare Management Module (RMM) helps us help our customers," said Steve Paton, vice president of infrastructure for Peer1 Hosting. "We're currently taking advantage of using RMM as a migration tool to help minimize downtime. The technology allows us to seamlessly and easily migrate from any location to bare metal and/or the cloud at a low cost compared to other solutions on the market today. The new disaster recovery component of RMM 3.0 looks fantastic. We're looking forward to deploying that as well and offering our customers full protection of their workloads in the data center."

"The migration and elasticity capabilities of the RackWare Management Module (RMM) technology are incredibly valuable," said Mike Strohl, chief executive officer of Entisys Solutions. "We're especially excited about the workload mobility features of the new RMM disaster recovery solution which allows us to fully protect our customers' servers and data using a variety of different recovery sites. This gives us the unique ability to diversify the risk of downtime in our customers' data center by utilizing a variety of cloud infrastructures."

"Given the dire consequences and high costs of downtime, having an effective disaster recovery system in place is vital for all organizations today," said Jim Rapoza, senior research analyst for Aberdeen Group. "By taking advantage of cloud-based disaster recovery, businesses that haven't implemented DR because of the cost and complexity can now leverage this important capability and avoid the costs of downtime."

"Disaster recovery is an essential element of any business' IT. However, so far enterprises have been limited to either complex, expensive high-availability solutions or tape backup solutions with lengthy restoration times, not ideal for a business that needs to get back up and running quickly after an outage or natural disaster," said Sash Sunkara, chief executive officer and co-founder of RackWare. "With today's launch of RackWare Management Module 3.0, we extend our intelligent automation and policy framework to enable business-critical use cases. Our cost-effective disaster recovery solution allows enterprises to ensure 360 degree protection of the majority of workloads in their data center. We believe that this unprecedented level of flexibility and protection at a low cost will be an industry game changer."


Tweet This:
.@RackWare brings cloud-based #disaster recovery for physical & virtual workloads with new tech launch

About RackWare
RackWare brings intelligence and automation to the cloud, providing greater availability for enterprises, greater flexibility for enterprise IT users and reduced costs for enterprise IT providers. Computing resources -- physical, virtual and cloud machines -- can be easily and automatically scaled up or down as demand fluctuates. On average, RackWare customers realize cost savings of 40 to 50 percent, while getting the highest performance and availability out of their cloud. RackWare was founded in 2009 and is based in Santa Clara, California. For more information, visit the RackWare website.

Media Contact
Steffi Lau
(774) 678-1086
Kulesa Faul for RackWare
Email Contact


