Sep. 28th, 2011 09:40 am


zorkian: Icon full of binary ones and zeros in no pattern. (Default)
Of late, I've switched to using tmux instead of screen. It is functionally the same thing -- a terminal multiplexer -- but in my usage it works a lot better, more predictably, and is easier to use. The hardest part is getting used to ^b as the shortcut, but you can change that to screen's ^a if you'd prefer.
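If the prefix is the sticking point, a couple of lines in ~/.tmux.conf swap it to screen's ^a; the last binding lets you press the prefix twice to send a literal ^a through to the program in the pane:

```
# ~/.tmux.conf -- use screen's ^a prefix instead of tmux's default ^b
set-option -g prefix C-a
unbind-key C-b
bind-key C-a send-prefix
```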

One of the coolest things I've seen is something I accidentally ran into today. When you have multiple terminals attached, tmux does the intelligent thing and constrains the size of everybody to the size of the smallest. This allows you to easily share your screen without having to muck about with making sure everybody is using the same size terminal. It also gives you a very visual display showing what is going on.

When I ran into this feature, though, I was confused. Today when I ran tmux attach, my terminal came up and the right side was filled with dashes. It looked a lot like a pane of some sort, but nothing I tried in the options would make it go away.

A quick ask in #tmux on Freenode and someone pointed me to the ^b D command. It allows you to remotely detach the other terminals that are attached.
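For reference, the related commands (the client name is whatever tty tmux reports for it; the one below is a made-up example):

```
tmux attach -d                       # attach, detaching all other clients
tmux detach-client -t /dev/pts/3     # detach one specific client
# or, from inside tmux, ^b D brings up an interactive client chooser
```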

Anyway, if you are a big screen user, you should check it out. It's pretty easy to get used to and seems to work better than screen. Don't get me wrong, I don't have any major complaints about screen, but tmux is a very interesting alternative.

Edit: I've been told this feature has been implemented in screen as well. It seems my experience was based on a really old version of the software. Moral of the story: comparing a years-old version of one program to today's version of its competitor is not a particularly fair comparison.
I have now learned a lot more about the present state of the art of high availability in Linux than I ever knew. Awesome, knowledge!

The best guide I've found so far for understanding the Pacemaker, etc configuration is here:

It was apparently just released and is currently in a draft state. It's from openSUSE, but it's applicable to anybody who is using this stack of technologies -- i.e., you can use it just fine for your Ubuntu setup, which is what I'm doing here at work.

I also recommend this blog post if you are coming from heartbeat and want more of a "and here's how to step into the present":

STONITH configuration (using IPMI) was a lot harder than I expected, but I found this useful for getting started with it:
Having now spent the day working on our Pacemaker installation, I have to wonder: is it natural that as things evolve, they become more and more complicated? And, in this case at least, accrete XML (dammit) as it grows up.

I seem to recall that heartbeat version 2 was much simpler and easier to use. Now, there's this complicated XML configuration system that makes life more difficult than it needs to be. I'm sure it was a reasonable decision at the time, but meh.

Right now I'm mired in getting a STONITH setup to work. Which, hilariously, is apparently the way to handle node failures. STONITH is an acronym for "Shoot The Other Node In The Head" and it is exactly what you think it is. If $A fails, then $B can issue a STONITH and reboot $A. The catch is that you have to build rules such that the STONITH resource responsible for shooting $A never actually runs on $A itself.

I called these rules "$A-no-suicide" because, hell, what else would you call them?
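In crm shell terms, a no-suicide rule is just a location constraint with a minus-infinity score pinning each node's STONITH resource away from that node. A sketch, with hypothetical node names and IPMI credentials:

```
primitive st-node-a stonith:external/ipmi \
    params hostname="node-a" ipaddr="10.0.0.1" userid="admin" passwd="secret"
primitive st-node-b stonith:external/ipmi \
    params hostname="node-b" ipaddr="10.0.0.2" userid="admin" passwd="secret"
# never run a node's own STONITH resource on that node
location l-st-node-a-no-suicide st-node-a -inf: node-a
location l-st-node-b-no-suicide st-node-b -inf: node-b
```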

At the very least, today has been amusing, if not productive: my IPMI systems are not cooperating and won't let me run commands remotely. Something about privilege level problems; not sure why yet.

Pro tip for working with Pacemaker: the 'show' command gives you a much easier syntax to work with for enacting configuration changes in the CRM tool's configure mode. I.e., from your root console, type crm configure show and enjoy the much more readable output.

You can use the same format for making configuration additions. It doesn't seem to work for changing attributes, though, so you'll have to go back to crm_resource and friends for that.
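A sample session, with a hypothetical virtual-IP resource:

```
crm configure show                                # readable dump of the live config
crm configure primitive vip ocf:heartbeat:IPaddr2 \
    params ip="10.0.0.50" cidr_netmask="24"       # additions use the same syntax
crm_resource --resource vip --set-parameter ip --parameter-value 10.0.0.51
```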
I'm putting this out here so that someone can hopefully find this post when they need help. I was setting up Puppet on a machine and getting a weird error:

Neither PUB key nor PRIV key:: header too long

This error means that the disk is full. Clear up some space, remove /var/lib/puppet/ssl (the directory is corrupt and partially written now), and then try again.
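The recovery, as a shell sketch; the SSL path is the Puppet default on Debian/Ubuntu, so adjust it to your layout:

```shell
# Confirm the disk really is full -- the error message doesn't say so itself.
df -h /var/lib 2>/dev/null || df -h /

# After freeing some space, throw away the partially written SSL directory
# so the agent can regenerate its keys and certs from scratch.
ssl_dir="${PUPPET_SSL_DIR:-/var/lib/puppet/ssl}"
rm -rf "$ssl_dir"

# Then re-run the agent, e.g.: puppet agent --test
```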
Amazon's Cloud Drive is a neat product. Aside from the notable lack of iOS support, it does basically everything I want it to do, even if part of my setup is clunky. Let me elaborate for a few minutes, though...

For the unaware, this product is basically a user-friendly frontend bolted on top of their S3 storage service with a few additional restrictions. They give you a system that lets you host music files (of a few formats) and then play them back from wherever you are. Additionally, they give you the ability to download your files as many times as you want, in case you want to make local copies, sync them to a device, or similar.

The major restriction on the service is that they do not expose the normal S3 APIs. You can't access your Cloud Drive through anything other than the Flash-based frontend they have built. It works in all modern browsers, though, on all major operating systems, so desktop compatibility is not a problem. They even have a snazzy upload client that lets you easily get music into the system so you don't have to dig around and upload files one by one.

I've been playing around with it for a bit now in three configurations. Here's my take...

Laptop, Mac, Chrome. This is the ideal configuration. The interface they have built is pretty easy to use and lets me do everything I've wanted to do (so far). Building a playlist, finding songs/albums/artists, and playing music all work as you'd expect. Downloading files is easy -- you can even download multiple at once, using the same Amazon MP3 Downloader the store uses. Nice.

Mobile, Android phone. Using the (free) Amazon MP3 application, accessing your Cloud Drive is a piece of cake (not the type that is a lie). You can access content you've uploaded from your computer and then play it -- playlists, albums, artists, etc. They recommend WiFi, and of course you are responsible for any over-the-air charges, but in my testing it worked pretty well. I had one song that started skipping, but I suspect that was a carrier (Sprint) issue, as I was driving through the hills along 280.

Server, Ubuntu Linux. This was the hardest part to set up. I have a server that sits in my living room and handles media for me. It doesn't have a monitor, as I prefer to do everything through the console. Unfortunately, a Flash-based system means you can't do that. It took about an hour of fiddling to get Gnome, X, and a VNC server installed. Once that was done, I was able to use Chicken of the VNC on my Mac to log in and run Firefox. The experience was then the same as on my laptop.
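For anyone repeating this setup, the recipe looked roughly like the following; the package names are my reconstruction for Ubuntu of that era, not notes from the actual session:

```
sudo apt-get install --no-install-recommends xorg gnome-core
sudo apt-get install vnc4server firefox
vncserver :1 -geometry 1280x800 -depth 24   # prompts for a password on first run
# then point a VNC client (e.g. Chicken of the VNC) at yourserver:5901
```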

Overall, Amazon has a solid service here, as we've come to expect from their cloud offerings. Using this service costs you nothing (for the first 5GB) and there is no lock-in. You can easily try it out, see if it works for you, and then abandon it later if you want.

One more neat thing worth mentioning: when you buy music on the Amazon MP3 Store, you are given the option to just upload the music to your Cloud Drive directly. If you do this, those songs never count against your quota. Theoretically, if you only ever buy music from Amazon, you can count it as a free online storage and playing system that lets you download your music later, as many times as you want. That's a pretty good deal, and a compelling argument for buying from Amazon.
Something I've now run into twice in my time at StumbleUpon, but doesn't seem to be noted anywhere online that I can find:

Short story: Removing the MySQL relay logs (and only the relay logs) is unsafe and will likely render your slave useless (missing data). This is true as of MySQL 5.0.

Now, let's talk about the longer version of the story...

A previous DBA who used to work here had written a script that took weekly snapshots of some databases for the purpose of creating development databases. The script, in pseudocode, did something like this:

stop mysqld instances
remove old /var/lib/mysql-dev
remove all relay logs from /var/lib/mysql
create lvm snapshot of /var/lib/mysql to /var/lib/mysql-dev
start up mysqld instances

The logic behind step 3 is: if we remove potentially many gigabytes of data before we snapshot, we save a bit of space. We aren't going to need those files on the development server later, and the main database can just re-download them from the master.

This reasoning is sound, but the reality is faulty and insidious.

The problem is that MySQL maintains its downloaded transaction state (i.e., how far it has read from the master) in the master.info file, not in the relay log files themselves. If you remove *relay*, you get rid of all of the transactions MySQL thinks it has downloaded, but you don't actually tell MySQL that they're gone. If your slave was behind at all, the database now starts skipping ahead until it finds a valid transaction that it can start applying... you therefore lose all of the changes that were contained in the un-executed portions of the relay log files that you removed.

If you want to prove this to yourself, you can temporarily issue a STOP SLAVE SQL_THREAD and then cat master.info several times. Watch the third line's value increase -- that's the Read_Master_Log_Pos value from SHOW SLAVE STATUS. It continues to increment because MySQL is still writing data to the relay logs. If you then remove the relay logs, MySQL doesn't go back and re-download them. Ouch.
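Concretely, assuming the Debian/Ubuntu default datadir:

```
mysql -e "STOP SLAVE SQL_THREAD;"
watch -n1 "head -3 /var/lib/mysql/master.info"   # line 3 = Read_Master_Log_Pos
mysql -e "START SLAVE SQL_THREAD;"
```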

The only safe way to clone a slave is to either remove everything (bin logs, relay logs, all info files) and then re-slave it with CHANGE MASTER TO, or to copy over everything to the new location (and change the server id, of course).
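As a sketch of the from-scratch route, run on the slave; the hostname, credentials, and master coordinates are placeholders, to be taken from the master or your backup:

```
/etc/init.d/mysql stop
rm -f /var/lib/mysql/*relay* /var/lib/mysql/master.info /var/lib/mysql/relay-log.info
/etc/init.d/mysql start
mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com', \
          MASTER_USER='repl', MASTER_PASSWORD='secret', \
          MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=4; \
          START SLAVE;"
```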

This post brought to you by the "ah shit, really?" department.
I'm trying to figure out what the state of the art is for so-called "reverse AJAX", "long polling", "Comet", etc. In a nutshell, I want to be able to have browsers open a connection to my server and then sit there and wait for data. Ideally, it will be a two-way connection that can pass data back and forth, but I'm OK with a one-way (server to browser) connection.

The downside of long polling is that you have to reconnect every time the server has something for you. This is fine in an environment where you're not getting a lot of data, but in the case that something is very popular (and generating a lot of message traffic), that's rather subpar.
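This reconnect overhead is easy to see in a naive long-polling client; every message costs a fresh HTTP round trip. A sketch (the endpoint and cursor parameter are hypothetical):

```
# reconnect-per-message: fine at low traffic, painful on a busy channel
while true; do
    # server holds the request open until it has data or ~55s pass
    curl -s --max-time 55 "http://example.com/poll?cursor=$cursor"
    # either way, we pay connection setup again before hearing anything new
done
```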

Comet seems like it'd actually be the best: using the Bayeux protocol, you get a server, client libraries, subscriptions, channels, etc. That sounds perfect. But it doesn't seem to be under active development anywhere. At least, the only code I can find is from 2006-2008, and with how fast browsers and everything else move, I'm wary of things that aren't stamped 2009. (And even warier when I can't find much documentation or discussion of something.)

I've also heard via Jesse about using Flash as a conduit, which sounds like it'd do everything I want -- but then you have to have this little Flash plugin on your page, which I think I'm against on principle: JavaScript should be able to handle this functionality.

So: dear lazyweb, what is the state of the art here? What are people doing to solve problems like this and have amazingly interactive web sites, etc?

I think it's pretty interesting that Microsoft would make this choice, but I don't think it's going to really change things. The people who know about Firefox, Opera, Safari, Chrome, etc and want to use them are already using them. The people who see this ballot box and don't know what they want are more than likely going to just tick the box that says "Microsoft" on it. After all, people go with what they know.

The more interesting part of this article is that Microsoft plans to release "confidential technical information" that can be used by other browser manufacturers to make their software better. Personally, I'm betting that the fine folks at the existing browser companies have already figured out a large part of this "confidential" information, and it won't really change the market that much.

But it's interesting to look at, to see how things have changed over the years. Microsoft seems to be trying to do better, to be better, than they have been historically. Especially as more and more things move online, they're no longer the de facto dominant player in the space, and they have to work harder to stay relevant. I think they'll pull it off, though; I have faith. (Says the man who runs Microsoft Vista at home.)


Mark Smith

Page generated Mar. 30th, 2017 06:45 am
Powered by Dreamwidth Studios