Thursday, November 07, 2013

Opserver – Stack Exchange’s open source .NET monitoring system

A good find from jokecamp: a Windows-specific set of monitoring tools for DevOps, built and open-sourced by the Stack Exchange team.

"This is a repository to keep your eye on. I can easily see it become the devops dashboard of choice for .NET environments."

Just spotted Opserver – Stack Exchange’s open source .NET monitoring system | Joe's Code:

Wednesday, November 06, 2013

Every upgrade of iTunes on Windows I need to fix a sqlite3 error

So every time there is a new version of iTunes I have to fix a sqlite3 error. To help me (and anyone else with the same issue) remember, here is the solution I found on the following helpful page:

First the error is "The procedure entry point sqlite3_wal_checkpoint could not be located in the dynamic link library SQLite3.dll."

And the fix: Hummie's World of Digital Scrapbooking Tutorials: iTunes sqlite3 dll error - How to Fix
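
For my own future reference, the gist of the fix as I understand it is to copy the up-to-date SQLite3.dll that iTunes installs under Apple Application Support over the stale copy that Windows is loading. A rough Ruby sketch; the exact paths are my assumption from memory, so verify them against the linked post first:

    # Sketch of the DLL fix; the source and destination paths below are
    # assumptions from memory -- verify them against the linked post.
    require "fileutils"

    source = 'C:/Program Files/Common Files/Apple/Apple Application Support/SQLite3.dll'
    dest   = 'C:/Program Files/Common Files/Apple/Mobile Device Support/SQLite3.dll'

    FileUtils.cp(dest, "#{dest}.bak") if File.exist?(dest) # keep a backup
    FileUtils.cp(source, dest)
    puts "Copied #{source} -> #{dest}"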

Create a Virtual Tape Library Using the AWS Storage Gateway

This looks interesting ...
 
 
Shared via feedly // published on Amazon Web Services Blog
Create a Virtual Tape Library Using the AWS Storage Gateway

The AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to integrate your on-premises IT environment with the AWS storage infrastructure.

Once installed and configured, each Gateway presents itself as one or more iSCSI storage volumes. Each volume can be configured to be Gateway-Cached (primary data stored in Amazon S3 and cached in the Gateway) or Gateway-Stored (primary data stored on the Gateway and backed up to Amazon S3 in asynchronous fashion).
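
As a rough illustration of the API behind all this (my sketch, not part of the original post), here's how you might enumerate gateways and their volumes with the AWS SDK for Ruby; the gem name and region are assumptions:

    # Illustrative sketch: list each gateway and its iSCSI volumes.
    require "aws-sdk-storagegateway"

    client = Aws::StorageGateway::Client.new(region: "us-east-1")

    client.list_gateways.gateways.each do |gw|
      puts "#{gw.gateway_name} (#{gw.gateway_type})"
      # Volumes are either Gateway-Cached or Gateway-Stored.
      client.list_volumes(gateway_arn: gw.gateway_arn).volume_infos.each do |vol|
        puts "  #{vol.volume_arn} - #{vol.volume_type}"
      end
    end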

Roll the Tape
Today we are making the Storage Gateway even more flexible. You can now configure a Storage Gateway as a Virtual Tape Library (VTL), with up to 10 virtual tape drives per Gateway. Each virtual tape drive responds to the SCSI command set, so your existing on-premises backup applications (either disk-to-tape or disk-to-disk-to-tape) will work without modification.

Virtual tapes in the Virtual Tape Library will be stored in Amazon S3, with 99.999999999% durability. Each Gateway can manage up to 1,500 virtual tapes or a total of 150 TB of storage in its Virtual Tape Library.

Virtual tapes in the Virtual Tape Library can be mounted to a tape drive and become accessible in a matter of seconds.

For long-term archival storage, Virtual Tape Libraries are integrated with a Virtual Tape Shelf (VTS). Virtual tapes on the Virtual Tape Shelf will be stored in Amazon Glacier, with the same durability, but at a lower price per gigabyte and a longer retrieval time (about 24 hours). You can move your virtual tapes to the Virtual Tape Shelf simply by ejecting them from the Virtual Tape Library using your backup application.
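
Bringing a tape back off the shelf is a single API call. Here's a sketch with the AWS SDK for Ruby (my addition, not from the post; the ARNs are placeholders):

    # Retrieve an archived tape from the Virtual Tape Shelf back into a
    # gateway's Virtual Tape Library; completion takes roughly 24 hours.
    require "aws-sdk-storagegateway"

    client = Aws::StorageGateway::Client.new(region: "us-east-1")

    client.retrieve_tape_archive(
      tape_arn:    "arn:aws:storagegateway:us-east-1:123456789012:tape/TEST0001",
      gateway_arn: "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"
    )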

The virtual tapes are stored in a secure and durable manner. Amazon S3 and Amazon Glacier both make use of multiple storage facilities, and were designed to maintain durability even if two separate storage facilities fail simultaneously. Data moving from your Gateway to and from the AWS cloud is encrypted using SSL; data stored in S3 and Glacier is encrypted using 256-bit AES.

Farewell to Tapes and Tape Drives
As you should be able to tell from my description above, the Storage Gateway, when configured as a Virtual Tape Library, is a complete, plug-in replacement for your existing physical tape infrastructure. You no longer have to worry about provisioning, maintaining, or upgrading tape drives or tape robots. You don't have to initiate lengthy migration projects every couple of years, and you don't need to mount and scan old tapes to verify the integrity of the data. You can also forget about all of the hassles of offsite storage and retrieval!

In short, all of the headaches inherent in dealing with cantankerous mechanical devices with scads of moving parts simply vanish when you switch to a virtual tape environment. What's more, so does the capital expenditure. You pay for what you use, rather than what you own.

Looks Like Tape, Tastes Like Cloud
Here's a diagram to help you understand the Gateway-VTL concept. Your backup applications believe that they are writing to actual magnetic tapes. In actuality, they are writing data to the Storage Gateway, where it is uploaded to the AWS cloud:

Getting Started
The Gateway takes the form of a virtual machine image that you run on-premises on a VMware or Hyper-V host. The Storage Gateway User Guide will walk you through the process of installing the image, configuring the local storage, and activating your Gateway using the AWS Management Console:

As part of the activation process, you will specify the type of medium changer and tape drive exposed by the Gateway:
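
Under the hood this is the ActivateGateway API call. A sketch of the equivalent call from the Ruby SDK (my addition; every value is a placeholder, and the changer/drive types are example values, so check the current documentation):

    # Activate a gateway as a Virtual Tape Library (VTL). The activation key
    # comes from the appliance itself; everything here is a placeholder.
    require "aws-sdk-storagegateway"

    client = Aws::StorageGateway::Client.new(region: "us-east-1")

    client.activate_gateway(
      activation_key:      "ACTIVATION-KEY-FROM-APPLIANCE",
      gateway_name:        "my-vtl-gateway",
      gateway_timezone:    "GMT",
      gateway_region:      "us-east-1",
      gateway_type:        "VTL",            # expose tape drives, not volumes
      medium_changer_type: "STK-L700",       # example medium changer
      tape_drive_type:     "IBM-ULT3580-TD5" # example tape drive
    )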

You will need to locate the Virtual Tape Drives in order to use them for backup. The details vary by operating system and backup tool. Here's what the discovery process looks like from the Microsoft iSCSI Initiator running on the system that you use to create backups:

Then you create some virtual tapes:
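
Or programmatically (again my sketch, not the post's; the gateway ARN is a placeholder):

    # Create five 100 GiB virtual tapes with a TEST barcode prefix.
    require "aws-sdk-storagegateway"
    require "securerandom"

    client = Aws::StorageGateway::Client.new(region: "us-east-1")

    client.create_tapes(
      gateway_arn:         "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
      tape_size_in_bytes:  100 * 1024**3,
      client_token:        SecureRandom.hex(16), # idempotency token
      num_tapes_to_create: 5,
      tape_barcode_prefix: "TEST"
    )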

Backing Up and Managing Tapes
Once you locate the tape drives and tell your backup applications to use them, you can initiate your offsite backup process. You can find your Virtual Tapes in the AWS Management Console:

As you can see, the console provides you with a single, integrated view of all of your Virtual Tapes, whether they are in the Virtual Tape Library (immediately accessible) or on the Virtual Tape Shelf (accessible in about 24 hours).

Gateway in the Cloud
The Storage Gateway is also available as an Amazon EC2 AMI and you can launch it from the AWS Marketplace. There are several different use cases for this:

Perhaps you have migrated (or are about to migrate) some on-premises applications to the AWS cloud. Using a cloud-based Gateway, you can maintain your existing backup regimen and stick with the tools that are familiar to you.

You can also use a cloud-based Gateway for Disaster Recovery. You can launch the Gateway and some EC2 instances, and bring your application back to life in the cloud.  Take a look at our Disaster Recovery page to learn more about how to implement this scenario using AWS.

Speaking of Disaster Recovery, you can also use a cloud-based Gateway to make sure that you can successfully recover from an incident. You can make sure that your backups contain the desired data, and you can verify your approach to restoring the data and loading it into a test database.

Bottom Line
The AWS Storage Gateway is available in multiple AWS Regions and you can start using it today. Here's what it will cost you:

  • Each activated gateway costs $125 per month, with a 60-day free trial.
  • There's no charge for data transfer from your location up to AWS.
  • Virtual Tapes stored in Amazon S3 cost $0.095 (less than a dime) per gigabyte per month of storage. You pay for the storage that you use, and not for any "blank tape" (so to speak).
  • Virtual Tapes stored in Amazon Glacier cost $0.01 (a penny) per gigabyte per month of storage. Again, you pay for what you use.
  • Retrieving data from a Virtual Tape Shelf costs $0.30 per gigabyte. If the tapes that you delete from the Virtual Tape Shelf are less than 90 days old, there is an additional, pro-rated charge of $0.03 per gigabyte.

These prices are valid in the US East (Northern Virginia) Region. Check the Storage Gateway Pricing page for costs in other Regions.
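
To make the arithmetic concrete, here's a quick back-of-the-envelope sketch in Ruby using the US East prices above (the workload numbers are invented):

    # Rough monthly cost for a hypothetical workload at the prices listed above.
    GATEWAY_PER_MONTH    = 125.00 # per activated gateway
    S3_PER_GB_MONTH      = 0.095  # tapes in the Virtual Tape Library
    GLACIER_PER_GB_MONTH = 0.01   # tapes on the Virtual Tape Shelf

    vtl_gb = 500    # recent backups, immediately accessible
    vts_gb = 10_000 # older backups archived to the shelf

    total = GATEWAY_PER_MONTH + (vtl_gb * S3_PER_GB_MONTH) + (vts_gb * GLACIER_PER_GB_MONTH)
    puts format("Monthly cost: $%.2f", total) # => Monthly cost: $272.50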

-- Jeff;

 


Monday, September 02, 2013

Whip Up Awesome w/the Chef Infrastructure Automation Cookbook [feedly]

This is a great book. I've not quite finished it yet, but it's the first tech book I've purchased in a long while, so that says something.

I'm using it to help set up both development and production environments for a new stack, and it's answered quite a few questions I've had about Chef, Vagrant and AWS.

Recommended.
Shared via feedly // published on Opscode Blog
Whip Up Awesome w/the Chef Infrastructure Automation Cookbook

The Chef Community and its many awesome contributors keep doing amazing things. Case in point, our friend Matthias Marschall (a software engineer 'made in Germany' and CTO at gutefrage.net GmbH helping run Germany's biggest Q&A site) just published his new book "Chef Infrastructure Automation Cookbook".

Check out the synopsis:

Chef Infrastructure Automation Cookbook has all the required recipes to configure, deploy, and scale your servers and applications, irrespective of whether you manage 5 servers, 5,000 servers, or 500,000 servers.

Chef Infrastructure Automation Cookbook is a collection of easy-to-follow, step-by-step recipes showing you how to solve real-world automation challenges. Learn techniques from the pros and make sure you get your infrastructure automation project right the first time.

Available here, it's a dynamite Chef resource created by one of us in the Community. Chef has always been about a group of like-minded practitioners working together to help each other build better infrastructure and Matthias' new book keeps that tradition going strong.

Whether you're new to Chef or a long-time user, Matthias has something to teach all of us. All of us here at Opscode thank Matthias for the Herculean effort he put into this project and hope all of you in the Community benefit from what he's created.


Wednesday, August 28, 2013

Achievement Unlocked - Opscode Learn Chef Tutorial Followed

Nothing too fancy here, just some basic Ruby, Git, Vagrant and VirtualBox in order to follow the steps on the Opscode Learn Chef site.



Next steps are to really get stuck in and get a useful environment set up.

All this learning about Chef, Knife, Cookbooks and Recipes has made me really hungry, so it's time to put it down; I'll continue looking at this shiny new stuff tonight, especially the Amazon Web Services bits.
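
If you're wondering what the tutorial actually has you do, it boils down to writing small Ruby recipes like this (a representative sketch from memory, not the tutorial's exact code):

    # hello.rb -- a first recipe in the spirit of the Learn Chef tutorial.
    # Each resource declares desired state; Chef makes the node match it.
    file "/tmp/motd" do
      content "Hello from Chef!\n"
      mode    "0644"
    end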

Saturday, August 24, 2013

DevOps side project - chef v puppet + vagrant + aws

So my side project is going to be an abstraction over some DevOps tasks, with opinion and (hopefully) best practice baked in: web and mobile/tablet clients to centrally manage these tasks, plus a nice reporting layer with data from various providers for developers and non-developers alike.

Ambitious, I know, but if it scratches my itch and helps me with my job, then perhaps it will be useful for others.

I want to support more than just continuous delivery of .NET-based apps to Windows servers, to build on the knowledge I've gained so far, and perhaps to be less reliant on the tools that have helped me up to this point.

For example, deploying a Java, MySQL and Solr stack, or a Ruby, Sinatra and MongoDB stack, all with infrastructure as code in mind.

Chef vs Puppet

My "to read and learn about" list has for some time included  tools such as vagrant, chef and puppet, Chef and puppet do pretty much the same thing albeit in slightly different ways and are built on ruby and while not a language I've used in anger is nice to read and has plenty of resources available. Both tools have a good community and have cookbooks or sample configs for most things to get you started.


There isn't much to choose between them, apart from a general feeling that Chef is more developer-focused, does most of the work on the end nodes (the actual machines) and has a steeper learning curve, while Puppet is more sysadmin-oriented and does more on the server side.

Whether this is fair or not I do not know. Being more developer than sysadmin by trade, I would normally download both and see which is the best fit by trying the same basic tasks in each, but that could turn into a yak-shaving exercise and I want to move as quickly as I can.
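
For a flavour, a basic task like "install and run NTP" looks like this in Chef's Ruby DSL (a minimal sketch; Puppet expresses the same thing in its own manifest language):

    # Classic first task: make sure NTP is installed and running.
    package "ntp"

    service "ntp" do
      action [:enable, :start]
    end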

Amazon Web Services

So I've decided to just bite the bullet, get cracking and pick Chef, purely because my cloud provider of choice is Amazon Web Services: they have chosen Chef for their OpsWorks product, and there are Chef resources for their other offerings, CloudFormation and Elastic Beanstalk.

If Amazon are investing infrastructure and time in it, then so will I.

With this decision made, I re-read some of the documentation from Amazon. There are multiple solutions and APIs that have come on leaps and bounds since I last looked, and they could really jump-start my project. Everything from full control to a Heroku-style system could be built, though I don't necessarily want to build a clone of Heroku or AppHarbor.

Vagrant

I haven't mentioned much about Vagrant, but from the reading I've done it's just a no-brainer to have in my toolset for testing configurations, and it has plugins for both Chef and Puppet. I will blog about this tool when I start using it in anger.
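
To give a taste, a minimal Vagrantfile that provisions a box with chef-solo looks something like this (the box name and recipe are placeholders):

    # Vagrantfile -- bring up a VM and provision it with chef-solo.
    Vagrant.configure("2") do |config|
      config.vm.box = "precise64" # placeholder Ubuntu 12.04 base box

      config.vm.provision :chef_solo do |chef|
        chef.cookbooks_path = "cookbooks"
        chef.add_recipe "apache2" # any cookbook checked out locally
      end
    end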

Summary and linkage

Some more reading and playing around is needed, but I want to get started as soon as possible and be as lean as I can be. Time to decide on a set of features for iteration 1.

Anyway, here are some links that started me off; I'll try to update them as I move forward.

OpsWorks:
Chef:
CloudFormation:
Elastic Beanstalk:
Vagrant:
Puppet:

Friday, August 23, 2013

Wallet really empty now!

So now I'm all geared up for my multi-platform DevOps application. Apple and Xamarin licenses purchased. Web front-end stack sorted, sharing an API that I'll also use for the Android and iOS applications.

Decided I couldn't afford a Mac license as well. Maybe if the apps go from scratching my own itch to helping others, I might dip my toe in the water.

Right here goes!


Creating Calca - A symbolic calculator with markdown for iOS and more [feedly]

Added to my "to listen to" list. It ticks all the boxes except DevOps for my interests right now.
 
 
Shared via feedly // published on Hanselminutes
Creating Calca - A symbolic calculator with markdown for iOS and more
Calca is a powerful symbolic calculator that gives you instant answers as you type. It was written by Frank Krueger (creator of iCircuit) using C# and Xamarin tools and is available today for iPhone, iPad, and Mac desktop - plus soon for Windows! How did Frank do it, and why?

Octopus 2.0: Health checks will now check for free disk space [feedly]

More goodness from Octopus Deploy. Running out of disk space is a situation that has happened to me a few times on test Amazon boxes where the default disk size settings were selected. It's not a huge problem, as you can create an AMI with a larger virtual disk, launch a new instance from it, and resize the partition.

But it's a 30-minute job by the time the box is back up and deployable again. It's always better to catch these things first and do a planned upgrade, rather than have a failed deployment let you know.
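
For the record, the bake-an-AMI-with-a-bigger-disk step is a single call with the AWS SDK for Ruby (my sketch; the instance ID and device name are placeholders):

    # Bake an AMI from a running instance with a larger root volume.
    require "aws-sdk-ec2"

    ec2 = Aws::EC2::Client.new(region: "us-east-1")

    ec2.create_image(
      instance_id: "i-0123456789abcdef0", # placeholder
      name:        "test-box-bigger-disk",
      block_device_mappings: [
        { device_name: "/dev/sda1", ebs: { volume_size: 100 } } # 100 GB root
      ]
    )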
 
 
Shared via feedly // published on Octopus Deploy
Octopus 2.0: Health checks will now check for free disk space

Hard disks are cheap, but running out of free space is a common problem when it comes to managing application servers, especially virtual servers. Octopus isn't meant to replace Nagios or other health monitoring tools, but we do have a basic health check that we run every 30 minutes against your servers. A good suggestion came up on UserVoice: report free disk space in the health check.
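
Octopus does this inside its own agent, of course, but the idea is simple enough to sketch in Ruby with the sys-filesystem gem (an illustration, not Octopus's actual code):

    # Rough equivalent of a free-disk-space health check (illustration only).
    require "sys/filesystem"

    WARN_THRESHOLD_GB = 5

    stat    = Sys::Filesystem.stat("/") # e.g. "C:/" on Windows
    free_gb = stat.block_size * stat.blocks_available / 1024.0**3

    status = free_gb < WARN_THRESHOLD_GB ? "WARNING" : "OK"
    puts format("%s: %.1f GB free", status, free_gb)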

This is the health check summary in Octopus 2.0:

Health check summary showing a warning if disk space is low

Clicking through, you can see the details for all the fixed disks:

Health check details showing free disk space for each fixed disk

As I said, this feature isn't meant to replace your existing server monitoring tools, but if you don't already have something in place, hopefully it's useful. Happy deployments!