Beyond DevOps: Security vs. Speed?

Several problems arise when the harm of software failure cannot be treated as an unbound variable

Fail fast, fail often. Yeah, but the first failure blew up the satellite. Well, this is just a photo-sharing app... not rocket science. Okay, but your photos are accessed by users who have passwords that they probably use for other things... and aren't some photos as important as satellites?

Several problems arise when the harm of software failure cannot be treated as an unbound variable. Here are some thoughts on two of them; I'll write later about two more (one cognitive, one computational).

Problem 1: Identity Persists Across Non-Obviously Coupled Systems (So the Stakes Are Higher Than Your Application)
Worse: security failures cascade well beyond physically contiguous realms (if root, then everything) into physically decoupled systems, via channels that are informational (shared passwords, mailboxes) or physical-but-accidental (power cut, then reboot). The brilliant and terrifying Have I Been Pwned? tool -- to say nothing of the astonishing, air-gap-annihilating Stuxnet [pdf] -- surfaces two obvious but easy-to-forget truisms: simply keeping data that should not be accessed by X off the same disk as data that can be accessed by X is not good enough, and the danger posed by access to one application may be slim compared with the danger posed by access to something more serious via the identity compromised in an in-itself non-dangerous breach.
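
That credential-reuse point is easy to check in practice. Here is a minimal sketch, assuming Python with the requests library and the publicly documented Pwned Passwords k-anonymity range API (only the first five hex characters of the password's SHA-1 digest ever leave your machine; the function name and sample password are illustrative):

```python
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora,
    via the Pwned Passwords k-anonymity range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response is one "SUFFIX:COUNT" pair per line for the whole hash range.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a deliberately terrible password
```

A nonzero count doesn't mean your users were breached through you; it means the identity they carry into your system is already cheaper to attack than you'd like to assume.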

So even if 'fail fast' is okay for your application, it may not be okay for your users. The result: a natural tension between the ideal of continuous delivery -- or Agile more broadly, or even heavily iterative development in general -- and security.

And while one of the major insights of Agile is that the best refiner is the real world (as opposed to the limited imagination of the planners), one of the major embarrassments of InfoSec is that 95% of security breaches involve human error. For Agile, failure is falling until you can walk. For InfoSec, failure is letting the terrifying cat out of the poorly designed bag. Post-breach, maybe you've started to salt your hashes (congrats, you're now more cryptographically sophisticated than Julius Caesar), but your users' passwords are already in the wild.
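
For the record, salting is cheap. A minimal sketch, using only Python's standard library (hashlib.scrypt here stands in for whatever memory-hard KDF you actually deploy; the parameters are illustrative defaults, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; a fresh random salt per user."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

if __name__ == "__main__":
    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("password123", salt, digest)
```

The point of the per-user random salt is that two users with the same password get different digests, so a single precomputed table can't unlock both.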

Problem 2: You Have Actual Human Enemies (So Something Smarter Than Chance Is Trying to Outsmart You)
Even considering only undirected threats, the Internet is getting more dangerous (Akamai recorded startling DDoS increases over the past year: 122% for application-level (OSI Layer 7) attacks alone). But the really scary problem is that real, smart, often well-funded humans are trying to make your software do what you didn't design it to do. For most failures, the enemy is "imprecise requirements" or "poor algorithm design" or "an inadequately scalable environment" (or even just blundering users); for security failures, the enemy is a malicious engineer.

This is the meatiest bit of the (otherwise slightly theatrical) Rugged Manifesto:

I recognize that my code will be attacked by talented and persistent adversaries who threaten our physical, economic and national security.

Yeah. So engineer.add(<malice, talent, persistence>) returns ????, and multiply(????, world.get(amountEatenBySoftware)) = ????!!!!!

If DevOps is a management practice, then a risk of ????!!!!! is pretty much unacceptable.


None of this, of course, means that Agile isn't an awesome idea. Nor am I suggesting that security can't be baked into an iterative, continuously improving process -- certainly it can, but on the face of it this seems to require a bit of finagling. And of course the proper way to address security will always be risk analysis, with a good lump of threat analysis included in any measure of technical debt.

I'd love to take some taxonomy of software errors (perhaps security errors in particular) and cross-tab cost per error type against cycle time (i.e., the length of the cycle during which each error that cost d dollars was introduced, plotted against d), normalizing by the estimated technical debt accrued during each cycle (assuming somebody measured that at the time, which probably didn't happen). Maybe someone has already done this (I've seen plenty of cost-per-error breakdowns, but none correlated with cycle time); and since technical debt is kind of a guess anyway, maybe anecdotes are a better gauge of the security cost of "shift left."
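
If you did have the raw data, the cross-tab itself would be the easy part. A sketch, assuming a hypothetical pandas DataFrame of per-defect records (the column names, numbers, and units are all made up for illustration):

```python
import pandas as pd

# Hypothetical per-defect records; in practice these would come from your
# issue tracker and incident post-mortems.
defects = pd.DataFrame({
    "error_type": ["injection", "auth", "logic", "injection", "auth"],
    "cycle_length_days": [7, 7, 14, 30, 30],
    "cost_usd": [12000, 50000, 3000, 80000, 20000],
    "est_tech_debt": [5, 5, 9, 20, 20],  # whatever unit your team pretends to use
})

# Cost per error type, cross-tabbed against the length of the cycle in which
# the error was introduced, normalized by estimated technical debt.
defects["cost_per_debt"] = defects["cost_usd"] / defects["est_tech_debt"]
table = pd.pivot_table(
    defects,
    values="cost_per_debt",
    index="error_type",
    columns="cycle_length_days",
    aggfunc="sum",
)
print(table)
```

The interesting question is whether the security rows get worse as cycle length shrinks, which is exactly the correlation I haven't seen published.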

Anyone have any experiences they'd like to share?

More Stories By John Esposito

John Esposito is Editor-in-Chief at DZone, having recently finished a doctoral program in Classics from the University of North Carolina. In a previous life he was a VBA and Force.com developer, DBA, and network administrator. John enjoys playing piano and looking at diagrams, and raises two cats with his wife, Sarah.
