Friday, December 30, 2011

'Security Through Obscurity' shows lack of maturity



My last post for 2011 will be about a favorite and common topic amongst security professionals: the art and technique of ‘security through obscurity.’  Anyone and everyone in the privacy and security fields knows about it, and I am sure that 95% of readers have knowingly used this approach to protect data and other assets (the other 5% are probably lying).

Simply put, the ‘security through obscurity’ control, if you will, means making some weakness so discreet, subtle or inconspicuous that you hope a user or bad guy never finds the loophole or back door, intentionally or otherwise. I am not talking here about unanticipated ways to defeat your explicit and obvious controls that the developers or programmers could never have contemplated; I am talking about the “ignore that man behind the curtain” ones, exactly the ones that little Toto sniffing under a curtain uncovers. Like the empty police car on the side of the highway. Or like stating that your password complexity requirements are 9 characters that must include 1 lower case letter, 1 number, 1 special character and 1 upper case letter. And then not enforcing the policy.
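
To make that last example concrete: a policy is only a control if something actually checks it whenever a password is created or changed. Here is a minimal, hypothetical sketch (in Python, not taken from any real product) of what enforcing, rather than merely hoping, looks like:

    import re

    # A minimal sketch of enforcing a hypothetical password policy:
    # at least 9 characters, with at least one lower-case letter,
    # one upper-case letter, one digit, and one special character.
    def meets_policy(password: str) -> bool:
        checks = [
            len(password) >= 9,                    # minimum length
            re.search(r"[a-z]", password),         # lower-case letter
            re.search(r"[A-Z]", password),         # upper-case letter
            re.search(r"[0-9]", password),         # digit
            re.search(r"[^A-Za-z0-9]", password),  # special character
        ]
        return all(bool(c) for c in checks)

    # Reject the weak candidate, accept the compliant one.
    for candidate in ("password", "Str0ng&Secret!"):
        print(candidate, "->", "accepted" if meets_policy(candidate) else "rejected")

The point is not the regular expressions; it is that the check runs every single time, instead of the policy living only in a document nobody reads.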

I got to thinking about this when I recalled a line in the Gnarls Barkley song “Smiley Faces”: “Was knowing your weakness what made you strong?” Now I have asserted in the past that a secret is only a secret if it remains between a minimal number of people; when the world knows it, it becomes as useful and valuable as yesterday’s newspaper. And if you made (most) private data held by governmental institutions and corporations easily available to anyone, acquiring it would mean nothing, since it could not be readily misused the way it can be today. In the security context, knowing that the weakness exists in your application/program/website is the strength you need to resolve the fault proactively. And in the future, you can build in ‘privacy by design’ rather than trying to bolt on security after the fact, which is always an ugly outcome, both aesthetically and from a user-experience standpoint.

My point here is that relying on the ‘security through obscurity’ approach, to any degree, for information protection shows an overall lack of sophistication and maturity in your security process and program. I realize that many companies take this approach because it is cheap and fast to deploy; building in proper controls takes time and money. Ultimately, though, you have two choices when you decide to take a path toward security: you can either pay now or pay later. You pay now by making the investment in proper coding controls and preventative measures; you pay later when someone finds the weakness or hole in your program, application or website and posts it on YouTube, and then you have to re-engineer the code all over again, doing the work twice. In my opinion, paying now lays the groundwork in your organization for both a respect for security and privacy considerations as a corporate value, and for a discipline of doing the right thing right now.

Make a New Year’s resolution, then, to avoid the temptation of at least one venial sin as you think about your security program and policies in 2012: the sin of sloth.

Happy New Year!

Sunday, December 11, 2011

Ignoring risk management is the riskiest act of all


I always say that everything comes down to risk management. From whether you fly or drive to your vacation spot, to whether you have one more beer at the party, to what stocks you invest in within your 401(k), it all comes down to decisions about risk. Sometimes the decisions are monumental, but mostly they’re insignificant. Most of the time we can ignore or accept the risks we take on daily with no impact; other times we see very real repercussions.

If there were ever a poster child for what happens when you blatantly ignore risk management, it would have to be Jon Corzine. The former CEO of MF Global, former Governor of New Jersey, and former Chairman of Goldman Sachs (who you would think would understand the essentials of risk management as well as anyone on this planet) apparently routinely ignored the pleadings of his Chief Risk Officer about the tenuous state of the firm’s investment positions.

Tragically, Mr. Corzine not only ignored what his Risk Officer was telling him, he undermined him by complaining to others in the company about the “dour attitude and persistence” (?!?!) of the Risk Officer.

No surprise that the Chief Risk Officer was let go in March of this year.

Calling the act of ignoring risk management the riskiest possible action may be a tautological overstatement of mythical proportions. It is true that America's culture, more than any other in the world, forgives failure, tolerates risk, and embraces uncertainty in almost any endeavor; in fact, the more brazen the better. Think of the Moon landing, or Evel Knievel.

Yet what is it about a CEO who is arguably a brilliant individual, with undeniable talent, insight and an ability to lead organizations successfully, that allowed him to take on risks that were not commensurate with his company’s, or at least his Chief Risk Officer’s, risk appetite? Your CRO and General Counsel should be the two people with whom you get full agreement on every significant decision that you as CEO make. Undermining your CRO over his warnings about your risky behavior is like telling everyone your cardiologist is a ‘Debbie Downer’ because he diagnosed you with heart disease.

I think our general nonchalance, or maybe disdain, for risk management stems from what we as lay people interpret as its accessibility. Everyone has heard or has used the question “What’s the risk?” Yet how many people really understand true risk management principles? Inherent risk? Residual risk? Really? Do you know what they mean? (Ultimately, I blame Parker Bros. for creating the board game Risk, which we all played as kids. Now everyone thinks they understand, in addition to world domination, ‘risk.’)
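
For readers who want those two terms pinned down, one common simplified formulation (an illustration, not a formal standard) treats inherent risk as likelihood times impact before any controls, and residual risk as what remains after the controls you actually have do their work. A rough sketch with made-up numbers:

    # One common, simplified formulation (illustrative only):
    #   inherent risk = likelihood x impact (before any controls)
    #   residual risk = inherent risk x (1 - control effectiveness)
    def residual_risk(likelihood: float, impact: float, control_effectiveness: float) -> float:
        inherent = likelihood * impact
        return inherent * (1.0 - control_effectiveness)

    # Hypothetical example: a 40% chance of a $1,000,000 loss,
    # with controls judged to be 75% effective.
    print(residual_risk(0.40, 1_000_000, 0.75))  # 100000.0 of residual risk left over

If the residual number is still above your appetite, you add controls, transfer the risk, or walk away; you don't simply complain about the dour attitude of the person doing the math.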

You rarely hear people throwing the term “quantum physics” around as cavalierly as we do the phrase “risk management.” Many of us in the corporate world think we understand risk management the way many homeowners think they understand electricity or plumbing. Sure, you can change out a faucet or wire a ceiling fan, but would you as an untrained homeowner really think it is worth the risk (that word, again) to rewire the circuit panel that powers your whole house? Most rational individuals don’t think saving the $300 it costs to have the electrician come and do the job right is worth the possibility of burning your own house down. A tough sell to the wife under any circumstances.

Just like I don’t expect my dentist to tell me about best practices in privacy, I don’t pretend I know the best way to extract a bicuspid either. So, please, begin to give risk management its due as a genuine discipline practiced by professionals who have different and specialized skills that you don’t have. Don Corleone needed a professional risk manager (his consigliere, Tom Hagen) and so do you, I’ll bet. Don’t go it alone. It’s not worth the risk.

Saturday, December 3, 2011

Ready for its closeup: Privacy in the Board Room


When (and if) you ever think of or hear the term “Board of Directors,” you probably envision a panel of crusty old-timers sitting around a long board room table day-dreaming, doodling, or dozing off while a CEO goes through yet another Death by PowerPoint presentation. If you think those people are there just to enhance their resumes and collect their stipend, think again. It’s a whole new world for Board members these days.

The visibility and implied responsibility that Board members have in today’s business environment is as substantial as it has ever been. No longer can Board members be asleep at the wheel while the CEO and/or the company explore every whim or hare-brained idea they want. Starting somewhere around the implosion of Enron back in 2001, investors and other interested observers began asking in earnest “Where was the Board in all of this?”

As recently as 2010, the Board of Hewlett-Packard ousted CEO Mark Hurd in a very public way, claiming impropriety with a female contractor and his expense reports. Even during the most recent scandal at Penn State, the media began questioning why the college’s Board of Trustees did not raise a red flag or challenge the actions of a rogue assistant coach. So why is this group of people, who had forever been seen by many as rubber stamps, now suddenly, and finally, taking on the task of ‘guardians of the corporate reputation’?

The Board of Directors or Trustees acts in trust for the shareholders and employees of a company, or for the taxpayers and students in the case of a school. They are tasked with ensuring that integrity of action and quality of product are delivered by the institution with which they are engaged. It is a duty that should not be taken lightly, and it appears to be taken more seriously now than ever.

Good thing, too. In addition to overseeing their respective institutions, governing boards must balance the various competing priorities of mission, vision, growth and mundane administration. One contemporary matter that will occupy the board’s agenda more and more is privacy: privacy of customers’ data, privacy of drivers’ locations, privacy of users’ preferences, privacy of subscribers’ habits, and on and on.

Privacy must be a board-level topic. Why? Because privacy and its first cousin, security, are not just compliance issues anymore; they are business issues. Business issues that deserve a seat at the table just like innovation, marketing, sales and design have had for years. A company with a core corporate value of privacy has a distinct competitive advantage over one that treats its customers’ privacy cavalierly. Witness two of the year’s highest-profile cases of consumer backlash against a company’s apparent disregard of its customers’ privacy: Google’s covert collection of Gmail account data when it rolled out its Social Circles product in May of this year, and Facebook’s censure by the FTC for a host of infractions, all centered on their indifference to users’ privacy. Both companies must now submit to privacy audits for the next 20 years, said the FTC. Facebook took its act of contrition seriously enough to go out and hire not one, but two (!) Privacy Officers in response to the action.

As a practitioner of the art, I take it as my responsibility to advance and elevate the issue of privacy all day and every day, as far up the chain as I can, and to provide visibility into current and pending privacy issues to senior management and ultimately the Board if and when they need it. Like so many other topics this year that got their time in the sun (the Arab Spring, WikiLeaks, Occupy Wall Street, to name a few), it is the right time for another, quieter, more discreet but no less revolutionary movement: to finally bring privacy & security from the back room to the board room.

Friday, November 4, 2011

How to improve privacy? How about we abolish it?


A true story sets up my premise: an article in the New York Times last week by Hasan Elahi, an associate professor at the University of Maryland (and an American) who was incorrectly identified by the FBI as someone associated with terrorists, opens an interesting discussion about the value of keeping your private information so private.

The story goes like this: while returning from a trip abroad, Mr. Elahi arrived at customs and was asked to step aside for additional screening. After a significant period of questioning, and full cooperation by the author, the FBI ultimately realized their mistake. In what I interpreted as a stick in the eye to the Man, the author soon after began documenting with photographs every place he had been, every meal he had eaten, every flight he had taken, every call he had made, every store he had visited and every purchase made there, every toilet he had used, to show them that he was not up to anything. He began by e-mailing the FBI the photos but then set up his own website, which now houses 46,000 images of his every movement over the past six months. To take it one step further, he has included screenshots of his financial data, phone records and transportation logs, all cross-referenced with the photos on the site so anyone can verify he was where he said he was.

Insane? Possibly. Obsessive? Absolutely. But Elahi goes on to say that anyone who uses a social media site on any regular basis does almost the same thing willingly every time they post an update, send a tweet, check in, poke someone, etc., whether they realize it or not.

More interestingly though, Elahi states:

“In an era in which everything is archived and tracked, the best way to maintain privacy may be to give it up. Information agencies operate in an industry that values data. Restricted access to information is what makes it valuable. If I cut out the middleman and flood the market with my information, the intelligence the F.B.I. has on me will be of no value. Making my private information public devalues the currency of the information the intelligence gatherers have collected.”

This is an interesting premise: data about you, the really sensitive kind, is only valuable to someone else, say, an identity thief, because it is so private and protected, and by inference, difficult for others to authenticate because it rarely sees the light of day. It is valuable to others because it is valuable to you. (How much sleep do you lose knowing your name, address and phone number are in the phone book, where they have almost no value?)

Keeping non-public data private also prevents some legitimate sources from, for example, reliably validating that the person trying to open a Best Buy instant credit card and purchase a 55-inch high-def flat screen TV is indeed you. Imagine if most of the data that you now protect so dearly (social security number, bank account number, driver’s license number) were readily public and easily available through a Google search. The clerk at Best Buy would simply type your name into a search engine and a number of sources would corroborate (with a photo) you and all of your data. No identity thief could then succeed without a tremendous amount of effort to impersonate you, and it wouldn’t be easy or worth it.
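
To illustrate the thought experiment (everything here is invented: the registry, its fields and its values), the clerk's corroboration step amounts to comparing what the applicant presents against a public, searchable record, photo included:

    # Hypothetical illustration only: a public, searchable registry of identity records.
    # The registry, its fields and its values are invented for this thought experiment.
    PUBLIC_REGISTRY = {
        "Jane Q. Public": {
            "ssn": "123-45-6789",
            "drivers_license": "D1234567",
            "photo_id": "sha256:9f2c0a7e",  # reference to a published photo
        }
    }

    def corroborate(name: str, presented: dict) -> bool:
        """True only if every presented detail matches the public record."""
        record = PUBLIC_REGISTRY.get(name)
        if record is None:
            return False
        return all(record.get(field) == value for field, value in presented.items())

    # The clerk checks the applicant against the public record, photo included.
    print(corroborate("Jane Q. Public", {"ssn": "123-45-6789", "photo_id": "sha256:9f2c0a7e"}))

In that world the hard part for a thief is no longer learning the numbers; it is matching the published photo and every other cross-referenced detail.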

Thwarting the misuse of private data via identity theft may be as easy as making (most) private data held by governmental institutions so easily available to anyone that acquiring it means nothing, since it cannot be readily misused the way it can be today. A secret is only a secret when it remains between a minimal number of people; when the world knows it, it becomes as useful and valuable as a day-old newspaper.

Now, who wants to go first?

Sunday, October 2, 2011

Maximum ROI on security awareness training? Move from awareness to ownership!


You may be unaware that October is Cybersecurity Awareness Month (who knew?), since it is in competition with other major events striving to highlight their relevance as well (National Apple Month, Eye Safety Prevention Month, Photographer Appreciation Month, and National Liver Awareness Month!).

Like most of the other campaigns celebrated and promoted during October, Cybersecurity Awareness Month hopes to promote just that: awareness. Yet the traditional thinking about employee training on issues like security, privacy, confidentiality, etc., has always revolved around the same common premise: awareness. Training your staff amounts to basically making them 'aware' of the threats, on the assumption that, as rational human beings, they will avoid risky behavior by deeming it not in their best interest. Unfortunately, the process of simply conveying the threats and risks of certain behavior, by (usually) transferring the knowledge that the InfoSec team possesses to average employees, hardly constitutes awareness, at least not in the sense of something we expect the employee to act on.

Though training has been well intentioned over the years, the constant blitz of threats and warnings by security experts has only, in my opinion, desensitized the average user to the real risks. Think about the old five-color threat warning system that Homeland Security wisely abandoned in April of this year. The threat level sat at 'High' (orange) or 'Elevated' (yellow) for all but 14 days of the entire nine years that the system was in place. During the 17 times it was raised and lowered back and forth between orange and yellow, do you recall ever changing your behavior commensurate with the risk rating? No. Why? Because though you may have absorbed the information if you happened to be taking a flight during the color change, you assumed that the job of spotting and preventing terrorist activity was largely someone else's. The act of conveying awareness never reached an inflection point. And, again in my opinion, the really effective and efficient way to derive value from your training & awareness campaigns is to move from awareness to ownership.

Consider these two analogies that drive home my point about making ownership of the privacy & security duty the job of all employees, and not just the InfoSec team and Privacy Officer. RSA, the eminent security company, was hacked earlier this year by an attacker who may have made off with the crown jewels of the company, an event comparable to Coca-Cola losing its secret formula to a thief. How did it happen? A hacker sent emails to two small groups of employees that included an attachment titled "2011 Recruitment Plan." One employee opened the attachment and inadvertently introduced a virus inside the RSA network, which ultimately gave the hacker access to the most sensitive and valued data of the company, and in doing so enabled later attacks against RSA's customers. Now I am positive that RSA employees had been instructed to the nth degree not to open attachments from people they don't know, not to click on links to suspicious websites, yada yada yada... But apparently this one employee (all it took) must have thought that "security is someone else's job," and "that's why we have anti-virus running on all our machines," and... you get the idea.

Secondly, consider the act of littering. When you throw trash out of the window on an interstate highway, you rarely consider the implications for you or your immediate surroundings. The effect, if any, on your conscience is fleeting; you keep moving farther away, literally, from the moment and from any sense of ownership of the problem or a resolution. ("They have prisoners clean that trash up, don't they?") However, if you live in a small neighborhood, gated community, enclave, or a development with association fees, you suddenly feel the pain of trash and debris more acutely as it encroaches on your residential utopia. Your 'awareness' of the effect of trash in your neighborhood quickly turns into 'ownership' of the problem, since you are invested in the outcome more than you are in, say, a clean highway somewhere five states over. Soon you find yourself yelling at neighborhood kids to pick up after themselves...

Like technology itself, hackers and other bad guys have evolved as well. Firewalls and networks have improved to the point of diminishing returns in spending on those devices; the outer defense of the company has been reinforced enough that it is almost impossible to incrementally improve security by, say, adding another moat around the building. The real long-term, sustainable improvement is via the employee. Humans have long been known to be the weakest link in the security chain, and the situation can only be improved through cognizant and mindful behavioral changes. Only through the evolution from awareness of the problem to ownership of the solution can we even begin to seriously make advancements in the holistic process of teaching employees right from wrong. We may never eliminate litter as a scourge, but we can get our employees to discover why they should not contribute to it, and make our company's stretch of highway the cleanest on the Interstate.

Sunday, September 11, 2011

9/11 and What We Learned


Despite the unparalleled carnage and inestimable impact on our national economy and psyche, there were a few worthwhile byproducts of 9/11.

First among them was the realization of what America’s reputation and standing were in the Middle East and the rest of the world in the days and weeks after the attack. (Why do they hate us?) As is the case with most tragedies, it quickly becomes evident who your friends are and aren’t. Every once in a while it helps to take stock of your allies and know where you stand with everyone else.

Instantly after 9/11, the boon to privacy and security professionals became evident, especially for business continuity and disaster recovery practitioners. Suddenly, the departments and disciplines that were hidden deep in the bowels of the IT department, that used to be thought of only as cost centers and roadblocks to getting access to fun websites at work, became the rising stars of the organization. Every CEO and Board of Directors now wanted to know what their company’s plan was if they were to be attacked or lose a data center. How would they stay online? How would they recover services after a terrorist attack? Could they?

The most interesting dividend to arise from 9/11, in my opinion, however, was the suspension of disbelief in the ‘anything is possible’ scenario. On September 10, 2001, you could not have a credible conversation with anyone you tried to convince that you needed to plan for a scenario in which a plane might crash into your building or data center, affecting your ability to continue your business. With good reason: before 9/11 no one really thought this would ever happen. Historically, when a plane was hijacked, you waited until the hijackers asked that the plane be taken to Havana or Cairo or wherever, waited for it to land, and then began negotiating with them. No precedent had prepared anyone for the possibility of the hijackers actually taking their own lives in the hijacking. What would that accomplish? How did that advance their interests if they were dead?

Now, of course, the approach is much different. No possibility is impossible. No scenario is too far-fetched to imagine or plan for. When I talk to service providers about how they will maintain continuity of business to my company in the event of a disaster, I expect to hear them talk about what they’ll do in the event of everything from an earthquake, hurricane, tsunami, waterspout, flash flood, lightning strike, or terrorist attack to even a zombie uprising. (Hey, you never know!)

So are we any better off now after 10 years of diligence and ‘saying something if we see something’? Are we safe? Are we safer? Has our alertness kept us out of harm's way from additional attacks on our soil, or did that one small group of lunatics just get lucky while we naturally had our guard down? America has habitually talked itself into one counterfeit panic after another (anyone remember killer bees from South America, SARS, bird flu, or mad cow disease?). The threat from terrorism is unfortunately not one of those red herrings; it is real and it is probably here to stay. Though every tragedy on any scale is regrettable and lamentable, we can always find a lesson or two in it, something to benefit from that may never have seemed probable or foreseeable.


Monday, July 25, 2011

Bite the Apple. Just be sure it isn't wax


Though the debt ceiling fiasco may be hogging the headlines today, there was one little story that may have seemed like only an esoteric IT-related ditty, but it is worth retelling here.

If you have ever bought a Louis Vuitton knockoff on the street corner of a big city, or a fake Rolex on Craigslist, you usually know it in advance. Your expectations are muted. The quality of the product, and its cost relative to the real article, are concessions you make for the low price of admission to faux-luxury.

Now, imagine you are in an Inception-like shopping scenario where the products you see for sale on the shelves and walls are indeed genuine, but nothing else around you is. In the Chinese city of Kunming, there is apparently an Apple store just like the ones we have here in the U.S., complete with blue-shirted staff members, high ceilings and IKEA-like pine woodwork throughout the place. The problem is, Apple has not opened a store in this city yet. What has occurred, actually, is that an entire Apple store, from floor to ceiling, has literally been faked. The inventory of Apple products for sale in this store is ostensibly real; even the staff thought that they were really working for Apple! (Reselling Apple merchandise is not a crime, even in the U.S.)

What I find most interesting and relevant to security about this news item is that the level of sophistication of this fraud is, frankly, almost admirable. If you are an American used to visiting Apple stores, even you may have been hard-pressed to realize that this store is not what it appears to be. (One sign on the window that said “Apple Stoer” might have given it away for you English majors.) Only now that this story has become worldwide news have the Chinese authorities stepped in to shut down the phony establishment.

But say you had only a smattering of English, knew the Apple brand only by the iconic white apple logo, or never really paid attention to detail; you would be hard-pressed to decipher that this place was bogus. My point here is that if we can barely detect a full-blown store front with all the trappings as being fake, how can your average internet user be expected to know when not to click on an e-mail or go to an unfamiliar and dangerous website? If people can be so easily deluded by a ruse such as the re-creation of an entire store, who among us can be sure that we’d never be so foolish as to input our credit card number or social security number into an elaborate and almost perfectly crafted website that looks exactly like the bank website we’re used to seeing every time we bank online? Unless you know what you are looking for, you can’t.

We all know people who are afraid to bank online or engage in e-commerce for fear of being bamboozled by bogus phishing sites. Imagine someone in Kunming saying something to the effect of “I’m afraid to buy a MacBook Air online, so I just go down to my local Apple store and buy it in person. That way I’ll be safe!”


Though the owner of the doppelganger Apple store may not have had deception as his primary motive, even as he was deceiving everyone from his landlord to his blue-shirted Genius Bar staff members, the incident itself is telling on many levels. Chief among my points here is that fraud is occurring on such an increasingly sophisticated level that it is almost incomprehensible to ponder how the good guys can begin to catch up, let alone stop it wholesale. If someone will go to such lengths and efforts to recreate the bricks and mortar of an entire store in almost every dimension in the real world, imagine what chicanery is already happening in the online world, and worse, what the future holds for us! If not for the second-rate sign painter who didn’t have spell check available when he was painting “Apple Stoer,” we would never have been talking about this. It reminds me of the greatest line in the movie ‘The Usual Suspects’: “The greatest trick the Devil ever pulled was convincing the world he didn't exist.”



Sunday, June 5, 2011

Corollary Risks & Unintended Consequences

The global nuclear watchdog agency, the IAEA, said last week that the Japanese government was remiss in its risk assessment duties by failing to fully anticipate what dangers a giant tsunami might pose to a nuclear reactor in that country. In fact, the leader of the IAEA's fact-finding mission, Michael Weightman, actually said that he could not understand how a country that has excelled in the prediction of earthquakes could have failed so spectacularly in predicting a giant tsunami. He went on to say that "Perhaps, their methodologies or data didn't allow them to predict that this size of tsunami could occur."

Huh?

I am under the impression, and operate as such, that after the lesson of 9/11, no risk scenario is too remote or unlikely to reasonably anticipate and plan for. How is it possible that Tokyo could not, or did not, see the corollary between a large earthquake (which Japan undergoes with regular frequency) and the quite likely consequence of a tsunami? Japan is, after all, an island nation surrounded by water, so tsunamis would be one of the most likely threats to consider planning for. The city of Topeka, Kansas can be excused for not having a tsunami response plan, but not any city in Japan.

If you plan a beer garden event, you had better have a corollary plan to address the risks of full bladders; if you plan a vacation to London, you had better plan for rain; and if you plan to buy a Bugatti Veyron Super Sport ($2.7 million), a car with 16 cylinders and 1,001 horsepower that gets only 8 miles to the gallon in the city, you had better be prepared for the consequences of higher fuel bills, higher car insurance and significantly less disposable income for other luxuries (four new wheels and tires: $50,000; annual routine maintenance: $20,000).
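
To put rough numbers on that last corollary (the annual mileage and fuel price here are assumptions for illustration; the maintenance and tire figures are the ones cited above), a back-of-the-envelope sketch:

    # Back-of-the-envelope corollary costs of Bugatti ownership.
    # Assumptions: 10,000 miles/year, $4.00 per gallon, 8 mpg city;
    # maintenance and tire figures are the ones cited above.
    miles_per_year = 10_000
    price_per_gallon = 4.00
    mpg_city = 8

    annual_fuel = miles_per_year / mpg_city * price_per_gallon  # $5,000 per year
    annual_maintenance = 20_000
    wheels_and_tires = 50_000  # per set of four, as needed

    print(f"Fuel: ${annual_fuel:,.0f}  Maintenance: ${annual_maintenance:,}  Tires: ${wheels_and_tires:,}")

The corollary risk, in other words, is not hypothetical; it shows up as a line item every single year.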

With the Bugatti's top speed at about 253 miles per hour, need we even broach the subject of the increased risk of dying in a crash?

Wednesday, May 4, 2011

For Privacy & Security, when Technology and Intelligence compete...it's no contest

With the recent news of the operation that killed Osama bin Laden, one thing was overwhelmingly clear: our brilliant and sophisticated technological superiority notwithstanding, at the end of the day it was pure, simple human intelligence that produced the dramatic results.

Takeaway: though technophiles like me love to layer on security tools and controls to maintain data security and privacy throughout the organization, it is the simple sentence or concept that hits home with the end-user employee, the person who sits on the front line of the trench warfare between customer confidence and blaring headlines, and it is he or she who really determines our long-term success.

Being able to convey the importance and criticality of security being 'everyone's job' (and not just InfoSec's) within the company is the single most valuable ROI of security & privacy awareness a company can realize. Forget DLP, NAC, anti-virus, encryption, etc.; translating 'intelligence' into accessible and actionable steps your employees can take to protect the company's 'crown jewels' will ultimately be the reward your business folks will look for, appreciate, and best of all, value.

Sunday, May 1, 2011

Bring Your Own Device to Work and Help Put the IT Department Out of Work?!?

I was having a conversation with a fellow security professional at the CSO Perspectives seminar a few weeks ago and he used the word “disintermediation” to make a point about his website. We had a bit of a chuckle about how that word was used (rather, overused) during the dot-com days. The context back then was that the new, online world was going to obsolesce the traditional bricks-and-mortar world through the ‘disintermediation’ process of cutting out the costly, no-value-adding infrastructure of middlemen.

This got me to thinking about the topic I was speaking about at the conference: the way to bring about a culturally acceptable balance between security and the use of consumerized IT. That is, how could IT departments allow users to bring and use their own equipment in the work environment and still maintain a modicum of security and privacy?

Why is this issue even a concern? In this cost-conscious environment where businesses are constantly being pressured to reduce expenses as much as possible, doesn’t consumerized IT actually make sense?

In some ways, yes. The primary downside of this veritable technological tsunami is the impact it has had on the dynamic between the typical user and the IT department. The user demand (especially among C-level types) to bring in a new iPad, iPhone, Droid, Xoom, etc. that they got for Christmas and have it hooked up to the company network inevitably highlights the tension with traditional IT resistance to allowing unknown/untrusted devices into the inner sanctum. The risks are obvious and myriad. These risks have led many organizations to firmly resist consumerization by restricting personal devices and consumer electronics in the workplace.

I argue that regardless of the formal or informal position of the IT department, or even the company policy in general, this faction of users is growing and is in fact disintermediating the IT department by working around it to get their devices to work at work. The ‘Just Say No’ position of many IT departments is in fact making the company less secure overall, as it is causing employees to circumvent the rules and blockades put up and kept in place from years past.

The driver of this form of insubordination is clear: these days, the boundaries of a company’s information network are not as clearly defined as they were in the recent past; the mobile phone is now the mobile office, for example. The ultimate objective of consumerization is simply work and personal life converged onto a single device. There is no longer credibility in walking around with five devices clipped to your belt, looking like something out of Batman Beyond. Today, if you walk into a meeting and plop down more than one device on the table, you are immediately branded a dinosaur.

The primary theme of my speech was that the trend of consumerized IT is irreversible and futile to resist, so CIO/CISO/CTOs need to seek a culturally acceptable middle way of accommodating the movement while still setting reasonable guidelines. The benefits of cooperation with a workforce that is more tech-savvy than ever are numerous, not the least being that the reputation of IT as a supporter of the business will be greatly enhanced. No longer will IT be identified as the “Dept. of No.”

Here are a few more reasons why it makes sense to listen to the sound of inevitability that’s coming at us at 100 mph. It’s all about productivity via familiarity with the toolset. Think about what life was like 15 years ago: you had use of all the great technology and software at work. When you came home, all you had were stripped-down versions of that machinery and those applications; toys, really. Today, the scenario is reversed. Employees who have state-of-the-art technology at home can’t reconcile the fact that when they come to work they have a Windows XP, or worse, Windows 98, machine that takes 2 days to boot up. Pent-up user demand (I want my MTV!), especially among Gen X, Gen Y and Millennials, should not be underestimated, and consumerized IT can be the Holy Grail of employee satisfaction.

The toothpaste is now out of the tube, folks. Employees are a lot more productive when they have a say in the tools they use every day. What we as IT professionals need to do is show leadership & get it right so that the company is protected & users are happy. At least for now.







Tuesday, April 5, 2011

Fare thee well, Epsilon…A future case study for brand & reputation risk

Like me, maybe you have received a notice in the last few days from one of the many institutions affected by a major data breach at Epsilon, an online marketing firm. So far, we are told, mostly e-mail addresses were compromised, but in some cases so were customer names. You might not think this sounds terribly alarming, unlike, say, the T.J. Maxx episode in 2007 that included the loss of 45 million debit and credit card numbers. But you would be wrong.

In the T.J. Maxx scenario, only the reputation and brand of T.J. Maxx was impacted. In this case, Epsilon is the service provider to a significant list of top-tier institutions including Barclays Bank, U.S. Bancorp, Walt Disney, Marriott, Ritz-Carlton, Best Buy, L.L. Bean, Home Shopping Network, TiVo and Target. The ongoing concern is that customers of these institutions can now be specifically targeted with fraudulent e-mail threats known as ‘spear phishing.’ (Though notice of the breach was sent to me by e-mail, oddly enough.)

In the T.J. Maxx case, the credit cards and debit cards were quickly canceled and replaced by the issuers (Visa, Mastercard, etc.). And in most cases these days, unlike in the recent past, the customer is not even responsible for the first $50 of fraudulent charges (Bank of America tells me that I will not be responsible for any fraudulent charges!). This lack of material and financial impact on a customer of T.J. Maxx helps explain why after their breach, not only did the sales of the company continue as before, but their stock price suffered no long-term ill effect. Average customers liked what the stores offered in terms of fashions and prices and disassociated the breach itself from the stores and the merchandise.

In the Epsilon case, however, I fear the result will be much more disastrous for them. The publicity around this episode alone is more significant than most other ones like it. Rush Limbaugh actually used the Epsilon example today to sell one of the identity theft products he touts on his show. The actual service offered by Epsilon can easily be replaced, but the untarnished reputation of the brand whose customer falls prey to a fraudulent e-mail cannot so easily be restored. If my identity is stolen after I click on a fake e-mail from my bank, I am going to remember and negatively associate the experience with the bank, not the e-mail marketing vendor who didn’t encrypt my e-mail address and name in their database.

We are not sure yet just how lax Epsilon was in the security controls that led to this incident. Whether or not they were as lax as T.J. Maxx will be uncovered in brutal detail over the next few weeks, especially in the security world. Security folks will be using this very case as a way to reiterate the internal message of due care and the need for this or that software or hardware to help protect their own shop from suffering a similar fate.

This unfortunate series of events highlights the kind of brand and reputation risk a firm can suffer when outsourcing even the most seemingly innocuous service. Proper vendor management and due diligence of service providers will be the talk of the town over the next couple of months. Your clients will be asking what and how you do it in your shop, without a doubt. So be ready with a solid response.

Monday, March 28, 2011

Things Worth Fighting For


I came across a little-publicized story this week that presents an interesting parallel to my constant message of privacy & security diligence. Here is the story: the Yamaha Motor Manufacturing Corporation has been making an all-terrain vehicle (ATV) in the U.S. called the ‘Rhino’ since 2003. The Rhino is different from its single-passenger predecessors in that it allows two passengers to sit side by side.

Four years later, the company added a few safety updates, like more passenger hand-holds. Lawyers for some injured drivers (plaintiffs) jumped on the company’s move, insisting that the safety features were added because the vehicles were not safe in the first place. Naturally, lawsuits poured in. Overwhelming a company with so many lawsuits that it figures it’s easier to settle than fight was the approach the plaintiffs’ attorneys took. The attorneys attacking the company even petitioned the Consumer Product Safety Commission (CPSC) to aid their suits by trying to force Yamaha to recall the vehicles.

Yamaha did not feel a recall was warranted and even worked with the Consumer Product Safety Commission to make other modest safety changes that would satisfy the agency.

Most importantly, the company responded to the litany of lawsuits in an uncommon way: It decided to fight back.

The company was ultimately vindicated, as it proved that in a significant number of instances the drivers of the vehicles were grossly at fault due to their own behavior. Riders are cautioned to operate the vehicle properly, and the CPSC investigations indicated that product defects, insufficient warnings, negligence, etc. were not the cause of the injuries.

What’s the takeaway then? The company believed in its product, it believed it had provided sufficient safety and precautionary advice for its customers to operate safely, and it decided to stand its ground and fight back on the principle of having done the right thing. (How unorthodox!)

And what is the connection to privacy & security? Companies create and publish rules and guidelines for their employees all the time, along with the why and how of following those policies. Sometimes the rules aren’t followed. Often, the rules are only words in a document on the company Intranet to make Legal or HR happy. Sometimes the Information Security team is only a paper tiger with little enforcement power or ability to bring about change and assure compliance.

But in some cases, the company itself, usually with the tone set at the top, decides to practice what it preaches and enforce the rules: make examples of those who purposely attempt to flout the rules, and inform those who do it unwittingly.

These days, consumers are savvier than ever about information. They know the value of their information and they want it protected. A customer will walk away from a company that only pays lip service to the principles of privacy & security, and they will excoriate the company online in blogs and forums for doing so.

The twin pillars of privacy & security can easily be an asset and a competitive advantage to a company that knows how to leverage that expertise and maintain its diligence. I know it’s not always easy to keep up the pressure. Employees get comfortable; employees get lazy. IT can sometimes be a hindrance and not a help to getting the business of the company done, so creative employees will go around the roadblocks to meet deadlines. Privacy & security sometimes suffer. When a company becomes lax, or inertia sets in, the guard gets let down and rules are no longer followed or enforced. That’s when incidents happen; that’s when headlines happen.

If a company believes in its principles, believes it has provided reasonably sufficient safety and precautionary advice to its employees to treat and handle information securely, and decides to stand its ground and fight back against the perpetual inertia of letting violations slide because it’s easier than making a fuss, then it has done the right thing. It will fight back and should fight back. Why? Because privacy & security is worth fighting for.









Thursday, February 3, 2011

What Do Stuxnet and Rollerball Have in Common? Only the Future of Warfare...


We have seen the future of war, and its name is Stuxnet.

When I was a kid, one of my favorite movies was a science fiction picture that proposed the idea that in the future, nations and war would no longer exist. The world would be controlled by a handful of international corporations. The controlling industrialists realized the folly of war, with its destruction, its carnage, its irrelevance, and resorted instead to a particularly gruesome sport as a proxy for war itself: Rollerball. Primary cities each had their own teams, and the teams would battle it out on the hardwood coliseum for supremacy. The movie's tagline is: "In the not too distant future, wars will no longer exist. But there will be Rollerball." (Rollerball is like a cross between roller derby, hockey and motocross.)

The original version of the movie (1975) is a bit dated and contrived, but Rollerball does contemplate a future that, in retrospect, now seems pretty plausible, and it makes a good security allegory.

The worst-case scenario of all-out nuclear war looks unlikely to occur for a variety of reasons, not the least of which are the overwhelming destruction and the obvious repercussions for the instigator. What is much more likely, based upon recent evidence, is that States and private industry will increasingly engage in proxy fights through esoteric non-State actors. Numerous examples of these proxy fights exist, including cyber-warfare between entities where the target was obvious but the attacker was not. In 2007, a three-week wave of massive cyber-attacks was aimed at the small Baltic country of Estonia, where Parliament, banks, and the media were targeted, allegedly by Russian hackers, after the Estonians' removal of a Soviet war memorial in the center of the capital, Tallinn. In late 2010, companies like Visa, MasterCard, PayPal and Amazon.com were targets of coordinated distributed denial-of-service attacks launched by hacker sympathizers of Julian Assange, designed to force the websites offline or make them generally unavailable for business, because of the companies' refusal to process payments supporting the WikiLeaks effort.

To best illustrate the premise that future conventional warfare for most of the advanced world will pose a lesser risk than it has historically, and will instead be replaced by pure cyber-warfare, consider the case of Stuxnet.

'Stuxnet' is a computer worm that was launched in July of last year with a destructive payload that had a defined target: Windows-based industrial systems. The worm was designed very specifically to attack only certain types of industrial systems, like the ones that run nuclear plants. Unlike most viruses and malware, Stuxnet does little harm to computers and networks that don't meet the explicit configuration requirements of its code. Like a laser sight on a sniper's rifle, fingerprinting technology allows Stuxnet to precisely identify the systems it infects. The creator of this worm took great care to ensure that only the designated target(s) were hit. A tremendous and sophisticated effort was required to avoid collateral damage.
 
What was the intended target? It is difficult to say for sure, but this much is known: 60% of the infected computers worldwide were in Iran. It is surely not a coincidence that Stuxnet infected the systems at two nuclear facilities that were hurriedly trying to enrich uranium.

The complexity of the code and the use of multiple programming languages suggest that only a State, or a collection of States, with deep enough pockets and vast dedicated resources could have the collective skill to create and deploy such a focused cyber-weapon. Most of the blame falls on the U.S. or Israel in particular, who would ostensibly have the most to gain by stopping or slowing the Iranians' progress toward nuclear capability.

The supposition then is obvious: this cyber-weapon was created to do what conventional warfare and diplomacy could not, surreptitiously taking out enemy nuclear capabilities like a sniper in the night. Unlike the very public 2007 Israeli air force raid on a Syrian site that the Israelis claimed was a nuclear facility with a military purpose, the Stuxnet attack is a much lower-profile affair. Yet the message is no less clear than a full frontal assault, and the effect just as valuable. Coupled with the additional benefits of no human casualties and no political fallout, cyber-warfare also appears to be very, very cost-effective.

From the limited test case of Stuxnet, we can easily extrapolate to a 1984-like world of cyber-warfare where, instead of Oceania declaring war on Eurasia one week or Eastasia the following week, battles will instead be played out over DS3s, T1s and fiber optic networks. Rather than sending one million expensively armed soldiers to invade an enemy, one simple mouse click could deploy a worm or virus that will shut down power grids and water systems or wreak havoc on international financial systems.

It may not be roller derby, but either way, Stuxnet presages the future of warfare.