What I Learned at CloudCamp
Though delayed a week due to snow, CloudCamp Indianapolis went off without a hitch tonight. If you're not from Indianapolis, you should keep reading anyway. CloudCamp is relatively new and held in major cities all over the globe. Thanks to the subject matter expertise and industry leadership of BlueLock, we held a successful event right here in Indy.
If you're wondering what Cloud Computing is, BlueLock has provided some discussion defining this rather nebulous term.
Cloud Computing in Indianapolis?
Indianapolis is getting attention nationally and internationally because of the low, stable costs associated with power and real estate, two huge factors in determining hosting costs. Additionally, our weather is stable and we sit at an intersection of major Internet backbones in North America. If you're hosting your application in a California data center right now, you may want to take a look!
BlueLock is a Leader Internationally in Cloud Computing
I have to be honest: the more I hear Pat O'Day speak, the more intimidated I am by how much that guy knows about cloud computing, utility computing, grid computing, data center management, virtualization, VMware… you name it, he knows it. He's soft-spoken, gracious, and has the uncanny ability to speak to those of us who aren't tech savvy in that industry!
I'm not discounting the others on the team! John Qualls and Brian Wolff are great friends, but tonight Pat was in the spotlight.
Break Out Sessions: App Scalability
One of the sessions I attended was led by Ed Saipetch. Ed worked at The Indianapolis Star when I did and built out much of the scalability and applications at the newspaper. He pulled off some magic back then, building enterprise applications on razor-thin budgets with few resources and a lot of demands.
Ed shared a ton about newer tools for automated load testing and application speed testing, along with a healthy discussion of architecture and what it means to scale vertically versus horizontally. I really enjoyed the conversation.
Sharding is actually a technical term?
[Insert Beavis and Butthead laugh] We even discussed sharding, a term I had reserved only for bathroom humor I saw in a movie once. Sharding is actually a means of scaling your application, rather barbarically, simply by splitting your data across multiple databases and pushing customers to different databases to alleviate the pain of hitting a single database all the time.
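For the curious, here's a minimal sketch of what that customer-routing idea can look like in code. The shard names and helper function below are purely illustrative, not from any particular product Ed mentioned:

```python
import zlib

# Hypothetical shard list: each entry stands in for a separate database.
SHARDS = ["customers_db_1", "customers_db_2", "customers_db_3"]

def shard_for(customer_id: str) -> str:
    """Deterministically route a customer to one shard.

    zlib.crc32 is used instead of Python's built-in hash() because
    hash() is randomized per process, which would send the same
    customer to different databases between restarts.
    """
    return SHARDS[zlib.crc32(customer_id.encode("utf-8")) % len(SHARDS)]
```

The same customer ID always maps to the same database, so reads and writes for that customer stay together while the overall load spreads across all the shards.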
Break Out Session: Cloud ROI
The costs associated with cloud computing can vary widely, from virtually nothing to systems that are highly monitored and strongly secured. BlueLock's flavor is Infrastructure as a Service, where you can basically outsource all the headaches of infrastructure to their team so you can concentrate on deployment and growth!
I went into the Return on Investment conversation thinking that we were going to have a very intense lesson in analysis of the resources necessary for traditional versus cloud hosting. Instead, Robby Slaughter led an outstanding discussion of the pros and cons of both and talked about risk mitigation.
Risk is something most companies can put numbers on: how much will it cost if you can't grow instantaneously? How much will it cost if you go down and need to bring a restored environment back up? These costs, or lost revenue, can overshadow the nickels and dimes analyzed in a traditional comparison.
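As a back-of-the-envelope illustration (the figures below are made up, not from Robby's session), the risk side of the comparison can be as simple as:

```python
# All numbers are hypothetical, for illustration only.
revenue_per_hour = 5_000   # revenue lost for each hour the site is down
hours_to_restore = 8       # time to bring a restored environment back up

outage_cost = revenue_per_hour * hours_to_restore
print(outage_cost)  # 40000
```

A single outage like that can dwarf the monthly savings you'd find nickel-and-diming traditional hosting against the cloud.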
Special thanks to BlueLock for a wonderfully hosted event (pun intended). I couldn’t wait to come home and blog about sharding.
“We even discussed sharding, a term that I had only reserved for bathroom humor that I saw in a movie once.”
I laughed so hard, I kinda sharded a little.
Again, [Insert Beavis and Butthead laugh]
Thanks for the plug, Doug! CloudCamp was a great event.
I wasn’t in Ed’s talk about sharding, but I thought I would clarify that this approach isn’t necessarily “barbaric.” Usually, sharding refers to breaking your database apart along application-specific fault lines. For example, if data from one customer never impacts data from another customer, you could divide your main database into two parts: A-L and M-Z.
To storage guys (like Ed) this is kind of a crude solution, because it means you have to maintain multiple databases that are effectively structured the same way. But it’s a great way to increase performance without adding much cost!
Crude may be a better word, Robby. Great clarification; it's a viable solution, just a bit of a brute-force one.
“Brute force” is right. But you know the old saying: “If brute force doesn’t work, maybe you’re not using enough!”