Azedi Technology - en Whipping up a solid LDAP infrastructure <!-- google_ad_section_start --> <p>I've been much too quiet lately. I'm still hard at work putting together what I hope will be a very strong infrastructure for my company's application hosting operations, with about 15 servers for production, content management, and staging and testing.</p> <p>One of the core components of this infrastructure is an <a href="">OpenLDAP</a> server, which I've been working on over the past week. Up until now it's been enough to have a couple of accounts which are created locally on all of the servers by puppet. I've got a chunk of disk space on a SAN which is shared across the machines, which is handy for having a common home area for key accounts I use to log in and administer the machines, as well as the puppet templates and manifests.</p> <p>I've added the LDAP server not so much for the server login accounts as for some of the services that we're putting in place for use by the company, in particular a wiki and bug tracking. Rather than having user accounts scattered across various applications and services, LDAP means everyone can have just one username and one password, and most things that require user accounts are able to integrate with it.</p> <p>LDAP does have some limitations. There are no really polished admin tools; most of what's out there is pretty rough and ready, with serious gaps. There's a reasonable collection of links to <a href="">ldap tools</a> on the bind9 site.</p> <p>The biggest gap for me is letting users manage their own accounts, especially resetting passwords. Most of the users in my directory won't have login accounts on the servers, so I can't just whip up a script that they can use like "passwd".</p> <p>I've settled on <a href="">Gosa</a>, which is actually a pretty nice, if somewhat sketchily documented, tool.
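</p>

<p>To give an idea of what I mean by the password-resetting gap, here's a rough sketch of the kind of admin-run reset helper I have in mind. It's entirely hypothetical: the base DN, admin DN, and mail template are placeholders, and it prints the commands it would run rather than touching a real directory.</p>

```shell
#!/bin/sh
# Hypothetical password-reset helper: a sketch only, not a real setup.
# Takes a username, generates a random password, and shows the
# ldappasswd and mail steps that would set and deliver it.
USERNAME="${1:-jsmith}"                # example default, for illustration
BASEDN="ou=people,dc=example,dc=com"   # placeholder base DN

# Generate a random 12-character alphanumeric password.
NEWPASS=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)

# The commands this script would run for real (printed here, since
# this sketch has no directory to talk to):
echo "ldappasswd -x -D cn=admin,dc=example,dc=com -W -s $NEWPASS uid=$USERNAME,$BASEDN"
echo "ldapsearch -x -b $BASEDN uid=$USERNAME mail"
echo "mail -s 'Your new password' the-address-found-above < template.txt"
```

<p>Something along these lines, wrapped around a templated email message, is all the "lost password" tooling I really need for now.</p>

<p>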
Once you've got Gosa configured, it lets you create, edit, and delete users and groups, as well as other directory thingies I'm not using yet like machines and applications. Users can also be permitted to log in and edit fields which you enable, including the password. This is great, because it means users can change their own passwords from the Web UI. </p> <p>Gosa is missing one feature I need: a "lost password" capability. You can only change your password if you know your current one. I intend to write a little script that takes a username, generates and sets a random password using ldappasswd, and emails it to the address in their directory entry with a little templated email message. </p> <p>When a user forgets their password I'll have to run this script by hand, but it'll be quick and simple to run. If I felt brave I could wrap it in a CGI script to let users run it themselves, but that would open it up to abuse. Even though it wouldn't let a baddy steal accounts, it would let them annoy users by resetting passwords.</p> <p>Alexander Prohorenko wrote a decent article for ONLamp about <a href="">setting up Gosa</a>, which I found very helpful.</p> <!-- google_ad_section_end --> infrastructure ldap Sat, 26 Aug 2006 05:03:09 -0700 kief 29 at The cool kids talk about operations <!-- google_ad_section_start --> <p>Tim O'Reilly, the boss of O'Reilly publishing and a key booster of the <a href="">Web 2.0 meme</a>, recently posted <a href="">an article about operations</a>.</p> <blockquote><p>One of the big ideas I have about Web 2.0 [is] that once we move to software as a service, everything we thought we knew about competitive advantage has to be rethought. Operations becomes the elephant in the room.</p></blockquote> <p>O'Reilly laments that most of the tools for deploying systems and applications on open source platforms (i.e. Linux) are not themselves open source.
Luke Kanies and others have commented on the article with examples of open source deployment and operations management tools, including <a href="">Puppet</a>, and others I've mentioned for <a href="/infrastructure-tools/index.html">system configuration</a> and <a href="/infrastructure-tools/network-monitoring.html">network monitoring</a>. </p> <p>I think <a href="">Luke's blog post</a> makes a fair point, that there is more activity in this area than O'Reilly gives it credit for, and in fact, the OSCon sponsored by O'Reilly Publishing had turned down Luke's proposal for a talk on Puppet. <a href="">Luke reports</a> that Tim O'Reilly emailed him directly in response to this, and gave him a slot on the OSCon speaker's schedule.</p> <p>So this post by O'Reilly should mark a turning point. By expressing his interest in the subject of operations, O'Reilly has invited commentary from people who are passionate about it, giving himself an opportunity to perhaps learn more about what's out there. I hope we'll see some new books in this area come out from O'Reilly. </p> <p>I wouldn't expect that Puppet has a wide enough user base to carry a title of its own (yet), but a guide to managing systems and networks with open source tools would be cool, and would fit in well with O'Reilly's existing catalog of systems administration titles.</p> <p>I also very much like <a href="">Dominic Mitchell</a>'s suggestion that there is a need to work out application deployment patterns. I'd add that patterns for infrastructure operations in general would be really useful, something I'd like to think about a lot more.</p> <!-- google_ad_section_end --> industry infrastructure Tue, 18 Jul 2006 23:17:50 -0700 kief 28 at Network Monitoring Tools <!-- google_ad_section_start --> <p>This section has links and information about network monitoring tools. I've used <a href="">Nagios</a>, which is open source and pretty mature, a lot in the past 4 or 5 years.
It is mainly for detecting and reporting problems, however, so it's useful to add something like <a href="">Munin</a> for tracking and graphing system resources and performance.</p> <p>Another tool I'd like to try out for this is <a href="">OpenNMS</a>, which is written in Java, and includes the graphing as well as detection and reporting, and also auto-detects devices and services on a network. </p> <p><a href="">Hyperic</a> is another open source tool in this category I'd like to learn more about.</p> <!-- google_ad_section_end --> <div class="book-navigation"><div class="page-links"><a href="/hosting/providers.html" class="page-previous" title="Go to previous page">‹ Hosting Providers</a><a href="/hosting-infrastructure-resources.html" class="page-up" title="Go to parent page">up</a><a href="/infrastructure-tools/index.html" class="page-next" title="Go to next page">System management tools ›</a></div></div> infrastructure monitoring Tue, 18 Jul 2006 22:59:22 -0700 kief 27 at I disagree with Lessig's evaluation of the Net Neutrality camps <!-- google_ad_section_start --> <p><a href="">Lawrence Lessig declares</a> that the two camps of the Net Neutrality debate are those who built the Net vs. those who never got it. I don't think that's accurate; the telecomms and cable networks have been a pretty key part of the Net. (Found via <a href="">Rafe</a>, btw).</p> <p>I think the real division here is content providers vs. pipe owners, and the attempt to do away with network neutrality is essentially a coup attempt by the people who own the pipes.</p> <p>People use the Net for the content, so that's where the value is. The pipes are just a commodity, which are expected to simply deliver the content. Selling a commodity service means competing on price, which means low margins.</p> <p>So now the owners of the pipes are trying to restructure the market so they can charge content owners for preferential delivery of their content.
This would overturn the current situation in their favor.</p> <p>I'm doubtful this will work. Restructuring a market to foil the natural flow of the supply and demand of value is like trying to turn back the tide. There are only two ways it will work, barring government intervention (i.e. paying the politicians to restructure the market for them).</p> <p>The first way is if the Unequal Net really does offer value. So if the content providers really feel a need to pay extra to get faster delivery to customers, they'll pay for it, and the Unequal Net will happen. </p> <p>The second way is if the telecomms companies set up the Unequal Net, and degrade normal network service to the point where content providers will see the value of paying for the higher tier. </p> <p>The first way is not going to happen. I've been providing technical infrastructure to content providers for 10 years, and I just don't see them paying extra to get faster network service to their users. It just isn't a huge pain point, and when it is, the focus is how to reduce bandwidth costs, not increase them.</p> <p>The second way could happen. The current political debate is around exactly this, with the telecomms companies vigorously denying they would slow down normal, non-premium network traffic. Instead they say they will be adding new capacity, and leaving the existing Net as it is. </p> <p>This is disingenuous, of course. As Internet usage continues to grow, with bandwidth-hogging applications like video, the existing pipes will become more and more overloaded. Where are the telecomms companies going to invest in expanding capacity, the commodity, low-margin regular Internet, or the new, high-capacity, high-cost infrastructure?</p> <p>One would hope the market would overcome this strategy of blackmail. But there is a class of content providers who will need faster, more reliable bandwidth as they move more seriously into the Net as a distribution channel.
They are the same content providers who are frightened of the Net as an open, level playing field. And they are content providers with deep pockets.</p> <p>So it's easy to imagine a future market with two levels. The premium infrastructure will be for big-corp commercial content, TV, movies, music, and gaming, probably heavy with (ineffectual as always) DRM. The other level will be for the rest of us, the grass-roots, the non-commercial groups, individuals, and micro-businesses. </p> <p>I don't think this would necessarily have an unhappy ending. That grass-roots market is huge, and as users we will demand reasonable-quality network access, so somebody will be able to make money providing it.</p> <p>But I think the irony of the situation is that the telecomms companies who will eagerly build the infrastructure for the premium network in hopes of getting a bigger slice of the pie will in the end find themselves again competing for business strictly on price. </p> <p>At the end of the day, people are interested in the content, and pipes are just pipes. If one telecomms company tries to hold Disney to ransom, they'll just switch to another.</p> <p><strong>Links:</strong></p> <p><a href="">Tim Berners-Lee</a> on Net Neutrality. Is it possible to have a better opening line?</p> <!-- google_ad_section_end --> industry Fri, 23 Jun 2006 06:39:47 -0700 kief 26 at Red Hat Network: How can they charge for less than you can get for free? <!-- google_ad_section_start --> <p>My first experience with the RedHat Network reminds me of a major limitation of commercial platforms that doesn't get much press. You actually get less than you do with free alternatives like apt-get and yum.</p> <p>I'm setting up a new hosting infrastructure for a client which, among other things, involves moving from the free Fedora to commercial Redhat Linux.
Although I've managed Redhat machines before, this is my first time using the <a href="">Redhat Network</a> (RHN) for installing and updating software. </p> <p>In the past I've used <a href="">apt-get</a> on Debian, and <a href="">yum</a> on Fedora, and found them a godsend. Set up properly, it takes minimal effort to keep multiple systems up to date and consistent, whereas when I've had to go the "by-hand" route, machines invariably ended up with older versions of software. It's just too hard to keep up with all the various packages installed on various servers, not to mention the headache of chasing down various dependencies and resolving conflicts when you do upgrade or install a new package.</p> <p>So faced with using RHN, I made the typical assumption of a user who is "upgrading" from a free system to a commercial one, i.e. I assumed it would be better. After all, if you're paying a company for it, it'll be better quality than something maintained by a bunch of volunteers, won't it? I should know better, having seen the horrors that go on inside the closed-door sausage factories of commercial software development groups.</p> <p>So what's my bitch this time? Well, there's not as much software available. The first thing I wanted to do with my new Redhat boxes was check out my puppet manifests and related configuration code from my subversion repository. To do this, I needed the subversion client, but sadly, RHN (at least for Redhat Enterprise 3) doesn't have subversion. </p> <p>It might be available from channels that I don't have access to; I don't know. If so, it's an example of the quicksand that I always seem to find myself mired in with commercial software.
Whenever I get saddled with Microsoft Windows servers, I end up wanting to do things that I could do for free on Linux, but can't do without paying thousands of pounds extra with Microsoft, buying add-ons, third-party software, not to mention "Enterprise" editions of everything when I want to run it on more than one server, all multiplied by CPUs of course. And, oh yeah, we need licenses for our development and staging servers as well. Bleh.</p> <p>So for now I'm going back to the old-fashioned days of searching and downloading packages. So far I'm up to 10 packages I've had to add manually, and will have to keep up to date across 15 or more machines.</p> <p>I'm considering installing yum and just ditching RHN entirely. I guess this will effectively turn my systems into Fedora boxes, and might have weird side effects. So far I haven't found much commentary out on the web related to this situation.</p> <!-- google_ad_section_end --> hosting redhat Mon, 12 Jun 2006 22:49:01 -0700 kief 25 at Configuring the Tomcat manager webapp <!-- google_ad_section_start --> <p>I like to have the Tomcat manager webapp installed on each instance, so I can play with the webapps, and see how many active sessions there are. To do this, make a file called manager.xml in the webapps directory of your Tomcat instance. One I like to use is this:</p> <pre>
&lt;Context path="/manager" docBase="/usr/local/tomcat/server/webapps/manager"
         debug="0" privileged="true"&gt;
  &lt;ResourceLink name="users" global="UserDatabase"
                type="org.apache.catalina.UserDatabase"/&gt;
  &lt;Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow=","/&gt;
&lt;/Context&gt;
</pre><p> The key bit is the <em>docBase</em> attribute, which needs to point to the webapp in the Tomcat installation directory. I add a RemoteAddrValve to keep evil people from trying to break into the manager.</p> <p>You'll also need to add a user account with permission to use the manager.
Put a file called tomcat-users.xml into the conf directory of the Tomcat instance, which should have something like the following:</p> <pre>
&lt;tomcat-users&gt;
  &lt;role rolename="manager"/&gt;
  &lt;user username="admin" password="hard2Guess" roles="manager"/&gt;
&lt;/tomcat-users&gt;
</pre><p> Finally, your server.xml file needs to have a UserDatabase configured. This is in the example configuration files from the Tomcat installation.</p> <!-- google_ad_section_end --> <div class="book-navigation"><div class="page-links"><a href="/tomcat-setup/basic-tightening.html" class="page-previous" title="Go to previous page">‹ Tightening the Default Tomcat Configuration</a><a href="/tomcat-setup/index.html" class="page-up" title="Go to parent page">up</a><a href="/tomcat/multiple-tomcat-instances.html" class="page-next" title="Go to next page">Running multiple Tomcat instances on one server ›</a></div></div> tomcat Wed, 31 May 2006 13:56:48 -0700 kief 24 at Running multiple Tomcat instances on one server <!-- google_ad_section_start --> <p>Here's a brief step-by-step guide to running more than one instance of Tomcat on a single machine.</p> <h4>Step 1: Install the Tomcat files</h4> <p>Download Tomcat <a href="">4.1</a> or <a href="">5.5</a>, and unzip it into an appropriate directory. I usually put it in /usr/local, so it ends up in a directory called <em>/usr/local/apache-tomcat-5.5.17</em> (5.5.17 being the current version as of this writing), and make a symlink named /usr/local/tomcat to that directory. When later versions come out, I can unzip them and relink, leaving the older version in case things don't work out (which rarely if ever happens, but I'm paranoid).</p> <h4>Step 2: Make directories for each instance</h4> <p>For each instance of Tomcat you're going to run, you'll need a directory that will be <em>CATALINA_BASE</em>. For example, you might make them <em>/var/tomcat/serverA</em> and <em>/var/tomcat/serverB</em>.
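</p>

<p>Setting up the skeleton for each instance is easy to script. Here's a sketch; I'm using a temporary directory as the base so the example is harmless to run anywhere, where in real life the base would be /var/tomcat, and the subdirectories are the standard per-instance Tomcat layout:</p>

```shell
#!/bin/sh
# Sketch: create the skeleton directory tree for each Tomcat instance.
# BASE would normally be /var/tomcat; a temp dir keeps the sketch harmless.
BASE=$(mktemp -d)

for INSTANCE in serverA serverB; do
    # Standard per-instance subdirectories:
    for DIR in conf logs temp webapps work; do
        mkdir -p "$BASE/$INSTANCE/$DIR"
    done
    # Seed the instance with config from the shared installation, if it's
    # present on this machine (you'd then tighten up server.xml per instance):
    cp /usr/local/tomcat/conf/server.xml /usr/local/tomcat/conf/web.xml \
       "$BASE/$INSTANCE/conf/" 2>/dev/null || true
done
```

<p>From there each instance just needs its own config and webapps dropped in.</p>

<p>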
</p> <p>In each of these directories you need the following subdirectories: conf, logs, temp, webapps, and work.</p> <p>Put a server.xml and web.xml file in the conf directory. You can get these from the conf directory of the Tomcat installation, although of course you should tighten up your server.xml a bit.</p> <p>The webapps directory is where you'll put the web applications you want to run on the particular instance of Tomcat. </p> <p>I like to have the Tomcat manager webapp installed on each instance, so I can play with the webapps, and see how many active sessions there are. See my instructions for <a href="/tomcat/configurating-tomcat-manager.html">configuring the Tomcat manager webapp</a>.</p> <h4>Step 3: Configure the ports and/or addresses for each instance</h4> <p>Tomcat listens to at least two network ports, one for the shutdown command, and one or more for accepting requests. Two instances of Tomcat can't listen to the same port number on the same IP address, so you will need to edit your server.xml files to change the ports they listen to.</p> <p>The first port to look at is the shutdown port. This is used by the command-line shutdown script (actually, by the Java code it runs) to tell the Tomcat instance to shut itself down. This port is defined at the top of the server.xml file for the instance.</p> <pre>
&lt;Server port="8001" shutdown="_SHUTDOWN_COMMAND_" debug="0"&gt;
</pre><p> Make sure each instance uses a different port value. The port value will normally need to be higher than 1024, and shouldn't conflict with any other network service running on the same system. The shutdown string is the value that is sent to shut the server down. Note that Tomcat won't accept shutdown commands that come from other machines.</p> <p>Unlike the other ports Tomcat listens to, the shutdown port can't be bound to a different IP address.
It always listens on the local loopback address.</p> <p>The other ports Tomcat listens to are configured with the &lt;Connector&gt; elements, for instance the HTTP or JK listeners. The <em>port</em> attribute configures which port to listen to. Setting this to a different value on the different Tomcat instances on a machine will avoid conflict. </p> <p>Of course, you'll need to configure whatever connects to that Connector to use the different port. If a web server is used as the front end using mod_jk, mod_proxy, or the like, then this is simple enough: change your web server's configuration. </p> <p>In some cases you may not want to do this, for instance you may not want to use a port other than 8080 for HTTP connectors. If you want all of your Tomcat instances to use the same port number, you'll need to use different IP addresses. The server system must be configured with multiple IP addresses, and the <em>address</em> attribute of the &lt;Connector&gt; element for each Tomcat instance will be set to the appropriate IP address.</p> <h4>Step 4: Startup</h4> <p>Startup scripts are a whole other topic, but here's the brief rundown. The main difference from running a single Tomcat instance is you need to set CATALINA_BASE to the directory you set up for the particular instance you want to start (or stop).
Here's a typical startup routine:</p> <pre>
JAVA_HOME=/usr/java
JAVA_OPTS="-Xmx800m -Xms800m"
CATALINA_HOME=/usr/local/tomcat
CATALINA_BASE=/var/tomcat/serverA
export JAVA_HOME JAVA_OPTS CATALINA_HOME CATALINA_BASE
$CATALINA_HOME/bin/ start
</pre> <!-- google_ad_section_end --> <div class="book-navigation"><div class="page-links"><a href="/tomcat/configurating-tomcat-manager.html" class="page-previous" title="Go to previous page">‹ Configuring the Tomcat manager webapp</a><a href="/tomcat-setup/index.html" class="page-up" title="Go to parent page">up</a></div></div> tomcat Wed, 31 May 2006 13:41:41 -0700 kief 23 at Configuration management with Puppet <!-- google_ad_section_start --> <p>I've started tinkering with <a href="">puppet</a> for configuration management. It's a far more flexible and extensible tool than cfengine, so it looks like the best way to go. </p> <p>Its main drawback is lack of maturity. The documentation is fair; there's a decent reference, but there are only two examples of configuration files that I've seen so far, and neither one is very complex. It's also fairly buggy, although the author is quick to respond when told about specific problems.</p> <p>I'll most likely be using Puppet to build a J2EE infrastructure based on Red Hat. I'd like to be able to contribute bug fixes, but I'm not sure how many spare cycles I'll have, given that I don't know Ruby.
But hopefully I can at least contribute some example files, and some manifests related to Tomcat and general J2EE web application deployments.</p> <p>Assuming I do use Puppet for this project, I'll try to post information here as I go along, in addition to the project itself.</p> <!-- google_ad_section_end --> <div class="book-navigation"><div class="page-links"><a href="/infrastructure-tools/cfengine.html" class="page-previous" title="Go to previous page">‹ cfengine</a><a href="/infrastructure-tools/index.html" class="page-up" title="Go to parent page">up</a><a href="/infrastructure-tools/cfengine-alternatives.html" class="page-next" title="Go to next page">cfengine alternatives ›</a></div></div> infrastructure puppet Wed, 31 May 2006 12:16:46 -0700 kief 22 at Ready for disaster <!-- google_ad_section_start --> <p>There are a lot of things you can do to make sure that when disaster strikes, you can get back online. Even in environments where you don't have automatic failover, you can take some basic steps so that when you get the alert or the phone call, you can bring things back online.</p> <p>Let's say you have a single server running a web application with a local database. However, you need to have a second server available. Maybe it's doing something else normally, maybe it's in a less than ideal location, like in your office at the end of a slower Net connection, but as long as you can fire up your application, repoint DNS, and be online, it'll do in a pinch.</p> <p>First, make sure you have the base server software ready, so your web, application, and database software are installed. </p> <p>Second, make sure you have a copy of your application code and configuration files handy. I always like to have these in source control, on a server other than my live one, so in the worst case I can pull them down to my emergency location.</p> <p>Third, you need your live data, that is, your database contents. 
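</p>

<p>For a typical MySQL-backed application, that dump-and-ship step might look something like the sketch below. The database name, credentials, and standby host are all placeholders, and the commands are printed rather than executed, since this example has no live database behind it:</p>

```shell
#!/bin/sh
# Sketch of a nightly dump-and-ship job for the live database.
# The database name, backup user, and standby host are illustrative only.
DB="myapp"
STAMP=$(date +%Y%m%d-%H%M)
DUMPFILE="/var/backups/$DB-$STAMP.sql.gz"

# Dump and compress in one pass, then copy the file off the live box.
# Printed rather than run, since this sketch has no database to dump:
echo "mysqldump --single-transaction -u backup -pSECRET $DB | gzip > $DUMPFILE"
echo "scp $DUMPFILE backup@standby.example.com:/var/backups/"
```

<p>Run from cron as often as the data warrants, something like this keeps a reasonably fresh copy within reach of the standby server.</p>

<p>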
Take frequent dumps of the data and have them handy, again away from your live server. Do this outside your system backups: use your database tools, such as mysqldump, to dump a file, then zip it and ship it somewhere else. How frequently you do this depends on how often the data changes, and how important it is to have fresh data. In the most extreme case, you might have the database continually dumping a log to a shared file store, where the backup server is reading it in.</p> <p>A sticking point may be the DNS. You can change the DNS, but users will have the old DNS information cached. How long it takes for them to get the new IP address depends on the TTL you have set in your DNS configuration, and changing this to a lower value after the crash ain't gonna help. A two-hour TTL is probably a good setting.</p> <p>Of course, better yet is if you have multiple servers behind a firewall and/or load balancer, so you don't need to change your DNS at all, just reconfigure and go. But if you're running a budget setup, these are simple steps to follow to make disasters a little less stressful.</p> <!-- google_ad_section_end --> fault tolerance infrastructure Sat, 27 May 2006 10:55:43 -0700 kief 21 at Why cheap hosting services are so cheap <!-- google_ad_section_start --> <p>Why would someone pay £500 per month for a server when they could pay less than £100 from a different provider? The short answer is support. But it's a more complicated story than that.</p> <p>One of my clients had a key server die this week, one of many they have with a cheap hosting provider. After investigating a bit, it was clear that it was not a system error; I was briefly able to examine the system logs using the provider's web-based recovery tool, with no evidence of problems, but then the server stopped responding even to the tool.</p> <p>So I called support.
The first-line support guy verified that the machine wasn't responding and couldn't be recovered with the online tool, so he referred it to engineering at the data center. </p> <p>However, he couldn't tell me when they would investigate the issue. It depended on how busy they were. With a real hosting provider, you get an SLA that includes response times. You also get someone who will communicate with you, to make you feel like they're on the case. I only got the promise that I would get an email when it was fixed: no direct contact, no phone call.</p> <p>In the end it took three hours for them to fix the server. This isn't an unreasonable amount of time, given the cost of the service, but during those three hours my client decided that the three new servers they had decided to add that very morning should be gotten from another provider instead. They'll probably end up paying five times the price, but they know they'll get better support.</p> <p>The key point here isn't that the service was poor. They fixed the problem fairly quickly. It isn't even just that they could not promise a response time; with thousands of customers paying less than a hundred pounds a month, they can't afford to make guarantees. </p> <p>But even a cheap and cheerful support organisation should have a way to give updates on their progress, even if it is just via semi-automated emails. Let me know when your engineers have started investigating the problem. Let me know when they've identified the cause, and when they've fixed it.</p> <p>The icing on the cake with these guys came when they sent the form-email that the problem was fixed. I replied, asking if they could tell me what went wrong. The reply came the next day: look at your server logs.</p> <p>That's lame. That's passing the buck. It's not in the server logs, because it's not an OS problem. Hard booting didn't fix the issue, and whatever did fix it didn't involve changing any configuration files or such on the server.
So it was a hardware or network issue of some kind. </p> <p>I guess I'll never know. And I guess I'll never use these guys again.</p> <!-- google_ad_section_end --> hosting Sat, 27 May 2006 10:43:32 -0700 kief 20 at