Azedi Technology

- apache (1)
- cfengine (2)
- development (1)
- dns (1)
- fault tolerance (1)
- hosting (3)
- industry (2)
- infrastructure (11)
- jvm tuning (1)
- ldap (1)
- monitoring (1)
- puppet (1)
- redhat (1)
- syzygy (1)
- todo (5)
- tomcat (10)


kief  2006-08-26 12:03       

I've been much too quiet lately. I'm still hard at work putting together what I hope will be a very strong infrastructure for my company's application hosting operations, with about 15 servers for production, content management, and staging and testing.

One of the core components of this infrastructure is an OpenLDAP server, which I've been working on over the past week. Up until now it's been enough to have a couple of accounts created locally on all of the servers by puppet. I've also got a chunk of disk space on a SAN shared across the machines, which is handy as a common home area for the key accounts I use to log in and administer the machines, as well as for the puppet templates and manifests.
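To make that concrete, here's a rough sketch of the sort of account entry involved. The base DN, uid, numeric IDs, and home directory path are all invented for illustration; the script only generates the LDIF and echoes the dn, with the actual ldapadd step shown as a comment since it needs a live directory:

```shell
#!/bin/sh
# Sketch: generate an LDIF entry for a shared admin account whose home
# directory lives on the SAN. All names and numbers here are assumptions.
cat > /tmp/admin-account.ldif <<'EOF'
dn: uid=sysadmin,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: sysadmin
cn: Shared Admin
sn: Admin
uidNumber: 2001
gidNumber: 2001
homeDirectory: /san/home/sysadmin
loginShell: /bin/bash
EOF

# Against a live server this would be loaded with something like:
#   ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f /tmp/admin-account.ldif
grep '^dn:' /tmp/admin-account.ldif
```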

kief  2006-07-19 06:17       

Tim O'Reilly, the boss of O'Reilly publishing and a key booster of the Web 2.0 meme, recently posted an article about operations.

"One of the big ideas I have about Web 2.0 [is] that once we move to software as a service, everything we thought we knew about competitive advantage has to be rethought. Operations becomes the elephant in the room."

O'Reilly laments that most of the tools for deploying systems and applications on open source platforms (i.e. Linux) are not themselves open source. Luke Kanies and others have commented on the article with examples of open source deployment and operations management tools, including Puppet, and others I've mentioned for system configuration and network monitoring.

kief  2006-07-19 05:59       

This section has links and information about network monitoring tools. I've used Nagios a lot in the past 4 or 5 years; it's open source and pretty mature. It is mainly for detecting and reporting problems, however, so it's useful to add something like Munin for tracking and graphing system resources and performance.

Another tool I'd like to try out for this is OpenNMS, which is written in Java, handles graphing as well as detection and reporting, and also auto-detects devices and services on a network.
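Part of what makes Nagios easy to extend is that a check is just a small program following an exit-code convention: 0 for OK, 1 for WARNING, 2 for CRITICAL. A minimal sketch of a disk-usage check in that style (the thresholds and the sample value are arbitrary):

```shell
#!/bin/sh
# Minimal Nagios-style check: report on a disk-usage percentage.
# Exit codes follow the plugin convention: 0=OK, 1=WARNING, 2=CRITICAL.
check_disk() {
    used=$1; warn=$2; crit=$3
    if [ "$used" -ge "$crit" ]; then
        echo "DISK CRITICAL - ${used}% used"; return 2
    elif [ "$used" -ge "$warn" ]; then
        echo "DISK WARNING - ${used}% used"; return 1
    else
        echo "DISK OK - ${used}% used"; return 0
    fi
}

# A real plugin would read the figure from df, e.g.:
#   used=$(df -P / | awk 'NR==2 { sub("%","",$5); print $5 }')
check_disk 42 80 90    # prints "DISK OK - 42% used"
```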

kief  2006-06-23 13:39     

Lawrence Lessig declares that the two camps of the Net Neutrality debate are those who built the Net vs. those who never got it. I don't think that's accurate, the telecomms and cable networks have been a pretty key part of the Net. (Found via Rafe, btw).

I think the real division here is content providers vs. pipe owners, and the attempt to do away with network neutrality is essentially a coup attempt by the people who own the pipes.

People use the Net for the content, so that's where the value is. The pipes are just a commodity, which are expected to simply deliver the content. Selling a commodity service means competing on price, which means low margins.

kief  2006-06-13 05:49       

My first experience with the Red Hat Network reminds me of a major limitation of commercial platforms that doesn't get much press: you actually get less than you do with free alternatives like apt-get and yum.

I'm setting up a new hosting infrastructure for a client which, among other things, involves moving from the free Fedora to the commercial Red Hat Enterprise Linux. Although I've managed Red Hat machines before, this is my first time using the Red Hat Network (RHN) for installing and updating software.

In the past I've used apt-get on Debian, and yum on Fedora, and found them a godsend. Set up properly, it takes minimal effort to keep multiple systems up to date and consistent, whereas when I've had to go the "by-hand" route, machines invariably ended up with older versions of software. It's just too hard to keep up with all the packages installed across various servers, not to mention the headache of chasing down dependencies and resolving conflicts when you do upgrade or install something new.
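The payoff with package managers is that keeping a group of machines consistent reduces to a trivial loop. A sketch (the hostnames are made up, and the ssh/yum commands are only echoed here rather than run, since actually running them needs real hosts):

```shell
#!/bin/sh
# Sketch: push updates to a list of servers with yum over ssh.
# Hostnames are hypothetical; commands are echoed, not executed.
for host in web1 web2 db1; do
    echo "ssh $host 'yum -y update'"
done
```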

kief  2006-05-31 20:56     

I like to have the Tomcat manager webapp installed on each instance, so I can play with the webapps, and see how many active sessions there are. To do this, make a file called manager.xml in the webapps directory of your Tomcat instance. One I like to use is this:

    <Context path="/manager" docBase="/usr/local/tomcat/server/webapps/manager"
             privileged="true">

        <ResourceLink name="users" global="UserDatabase"
                      type="org.apache.catalina.UserDatabase"/>

        <Valve className="org.apache.catalina.valves.RemoteAddrValve"
               allow="127\.0\.0\.1"/>

    </Context>

kief  2006-05-31 20:41     

Here's a brief step by step guide to running more than one instance of Tomcat on a single machine.

Step 1: Install the Tomcat files

Download Tomcat 4.1 or 5.5, and unzip it into an appropriate directory. I usually put it in /usr/local, so it ends up in a directory called /usr/local/apache-tomcat-5.5.17 (5.5.17 being the current version as of this writing), and make a symlink named /usr/local/tomcat to that directory. When later versions come out, I can unzip them and re-link, leaving the older version in place in case things don't work out (which rarely if ever happens, but I'm paranoid).
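The unpack-and-symlink layout can be sketched like this. It uses /tmp so it can run anywhere, and stands in a mkdir for the actual download-and-untar step, which needs the real tarball:

```shell
#!/bin/sh
# Sketch of the unpack-and-symlink layout described above.
# Uses /tmp instead of /usr/local so it can run without root.
PREFIX=/tmp/tomcat-demo
mkdir -p "$PREFIX"
cd "$PREFIX"

# In practice: download apache-tomcat-5.5.17.tar.gz from tomcat.apache.org,
# then:  tar xzf apache-tomcat-5.5.17.tar.gz
mkdir -p apache-tomcat-5.5.17      # stand-in for the unpacked tarball

# Point a stable name at the current version; upgrading is just re-linking.
ln -sfn apache-tomcat-5.5.17 tomcat

readlink "$PREFIX/tomcat"          # prints "apache-tomcat-5.5.17"
```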

kief  2006-05-31 19:16       

I've started tinkering with puppet for configuration management. It's a far more flexible and extensible tool than cfengine, so it looks like the best way to go.

Its main drawback is lack of maturity. The documentation is fair and there's a decent reference, but there are only two examples of configuration files that I've seen so far, and neither is very complex. It's also fairly buggy, although the author is quick to respond when told about specific problems.

I'll most likely be using Puppet to build a J2EE infrastructure based on Red Hat. I'd like to be able to contribute bug fixes, but I'm not sure how many spare cycles I'll have, given that I don't know Ruby. But hopefully I can at least contribute some example files, and some manifests related to Tomcat and general J2EE web application deployments.
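For a flavour of the kind of Tomcat manifest I mean, here's a rough sketch. The resource names and paths are invented, and Puppet's syntax is still evolving, so treat it as illustrative pseudocode rather than a working manifest:

```puppet
# Hypothetical sketch: keep a tomcat user, a symlinked install,
# and a running service in place on each node.
user { "tomcat":
    ensure => present,
}

file { "/usr/local/tomcat":
    ensure => "/usr/local/apache-tomcat-5.5.17",   # symlink to the install
}

service { "tomcat":
    ensure  => running,
    require => File["/usr/local/tomcat"],
}
```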

kief  2006-05-27 17:55       

There are a lot of things you can do to make sure that when disaster strikes, you can get back online. Even in environments where you don't have automatic failover, you can take some basic steps so that when you get the alert or the phone call, you can bring things back online.

Let's say you have a single server running a web application with a local database. To get back online after a failure, you need a second server available. Maybe it's normally doing something else, maybe it's in a less than ideal location, like in your office at the end of a slower Net connection, but as long as you can fire up your application, repoint DNS, and be online, it'll do in a pinch.
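A warm standby like that only works if its data is reasonably fresh, so the other half of the setup is a periodic copy of the database dump and content to the second box. A sketch (paths are invented, and it copies between local directories so it can run anywhere; against a real standby the cp would be an rsync over ssh, as the comment shows):

```shell
#!/bin/sh
# Sketch: keep a standby box stocked with the latest database dump.
# Paths and hostnames are assumptions; local dirs stand in for the
# primary and standby servers so the sketch is self-contained.
PRIMARY_DATA=/tmp/standby-demo/primary   # dump + content on the live box
STANDBY=/tmp/standby-demo/standby        # would be a remote host in practice

mkdir -p "$PRIMARY_DATA" "$STANDBY"

# Stand-in for something like: mysqldump appdb > "$PRIMARY_DATA/appdb.sql"
echo "-- pretend mysqldump output --" > "$PRIMARY_DATA/appdb.sql"

# Copy the latest dump across; with a remote standby this would be e.g.
#   rsync -az "$PRIMARY_DATA/" user@standby:/var/backup/
cp -a "$PRIMARY_DATA/." "$STANDBY/"

ls "$STANDBY"                            # prints "appdb.sql"
```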

kief  2006-05-27 17:43     

Why would someone pay £500 per month for a server when they could pay less than £100 from a different provider? The short answer is support. But it's a more complicated story than that.

One of my clients had a key server die this week, one of many they have with a cheap hosting provider. After investigating a bit, it was clear that it was not a system error: I was briefly able to examine the system logs using the provider's web-based recovery tool, with no evidence of problems, but then the server stopped responding even to the tool.

So I called support. The first-line support guy verified that the machine wasn't responding and couldn't be recovered with the online tool, so he referred it to engineering at the data center.

