Blog - Michael's Musings

Tue, 01 Sep 2015

Using new Hack font

I downloaded the new Hack font from:

http://sourcefoundry.org/hack/ specifically from: https://github.com/chrissimpkins/Hack/releases/download/v2.010/Hack-v2_010-ttf.zip

also at http://www.sandelman.ca/tmp/Hack-v2_010-ttf.zip. I then ran mkdir ~/.fonts and extracted the zip file there.

I ran:

fc-cache -r
fc-list Hack

to make sure it is installed, and then placed:

xterm*faceName: Hack:style=Regular
at the top of my .Xresources file, loading it again with xrdb -merge ~/.Xresources.

A new xterm (I still use plain xterm for various reasons) started from anywhere will pick that font. The nicest thing: the 0 has a nice dot in it, and you can tell 1 and l apart.
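For reference, the whole sequence can be sketched as a script. The download URL is the one above; the grep guard against duplicate .Xresources lines is my addition.

```shell
# Sketch of the install steps above, assuming the release zip from GitHub
# has already been downloaded into the current directory.
url=https://github.com/chrissimpkins/Hack/releases/download/v2.010/Hack-v2_010-ttf.zip
zip=$(basename "$url")
if [ -f "$zip" ]; then
    mkdir -p ~/.fonts
    unzip -o "$zip" -d ~/.fonts
    fc-cache -r                  # rebuild the font cache
    fc-list Hack                 # confirm the family is now visible
    # Append the xterm setting only if it is not already present:
    grep -q 'xterm\*faceName' ~/.Xresources 2>/dev/null || \
        printf 'xterm*faceName: Hack:style=Regular\n' >> ~/.Xresources
    xrdb -merge ~/.Xresources    # needs a running X session
fi
```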

Thu, 13 Aug 2015

Problems with Google Apps stopping

Many people running CM12 (such as FatToad, and also my wife and me on our Samsung T699 Relay. We like keyboards) report that Google Play (services) stops and starts.

http://forum.cyanogenmod.org/topic/103620-cm12-unfortunately-google-play-services-has-stopped/

has this to say about it:

ATTENTION: To all those having trouble with Gapps crashing on CM12, there is a solution. It is not the fault of the CM or the Gapps. Simply updating the Custom Recovery is what fixes the problem as this is a permissions related issue. After that you can install any Gapps package. After hours of digging on the net, I came across this solution and I can confirm it works 100%.

You can download the latest Recovery mod by searching the net. I don't have any posts to my name so I can't share outside links. The version I used was openrecovery-twrp-2.8.5.2-mako

The best part is that you don't even have to wait to boot to copy stuff back to the internal storage, as the latest version comes with MTP support directly from the Recovery mode itself. Simply brilliant.

The only point to remember is that after updating your Recovery, please fix the root settings if it asks you and then do a clean install all over again, including the CM12 image. And this time your Gapps installation will boot smoothly too.

I am using clockworkmodrecovery.6045.apexqtmo.touch.img, which I have put here: http://junk.sandelman.ca/junk/r2s3/

I found that recovery-clockwork-6.0.4.4 did not work (not sure why; see below). I also wound up with a few dud downloads of 6.0.4.5. I also put up a subdirectory T699, which contains the original firmware for the S Relay, as you might need that if you have to SIM unlock it.

Remember to WIPE THE CACHE after you update stuff like this.

Also see: http://androidpulp.blogspot.com/2014/09/cwm-6045-recovery-for-galaxy-s-relay-4g.html

I also note that I tried multiple times using "adb sideload" for the gapps-lp package, and finally I did it again with a regular

adb push gapps-lp-20141212-signed.zip /storage/sdcard1/.

and used the regular "install zip" option from recovery. After this attempt, the dalvik optimization stage ("Android is upgrading"...) took a lot longer, with 117 apps to "optimize".
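The push route can be sketched as follows; the md5sum comparison is my addition, to catch the dud downloads mentioned above before flashing anything.

```shell
# Sketch, assuming a device reachable over adb and the zip in the current
# directory.  The checksum steps are my addition, not part of the original
# procedure: compare the two hashes (and the one on the download page) by eye.
ZIP=gapps-lp-20141212-signed.zip
if [ -f "$ZIP" ] && command -v adb >/dev/null; then
    md5sum "$ZIP"                              # local hash
    adb push "$ZIP" /storage/sdcard1/
    adb shell md5sum "/storage/sdcard1/$ZIP"   # should print the same hash
fi
```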

Some of these suggestions might also help you:

http://www.teamandroid.com/2015/04/22/fix-google-play-services-has-stopped-error-message/

This is try four
Do comments with Disqus work?
Installing Docker on Debian Wheezy

I was installing docker in order to run a test harness in radiusclient. I was reading from: https://docs.docker.com/installation/debian/

and it says to do this: curl -sSL https://get.docker.com/ | sh

Please don't do that. Piping network content into a shell is just a really, really bad habit, and it's totally lazy, and yet they expended significant effort making it work. Here is what it does in the end:

sudo apt-get install -y -q apt-transport-https ca-certificates
mkdir -p /etc/apt/sources.list.d
echo 'deb https://apt.dockerproject.org/repo debian-wheezy main' | sudo tee -a /etc/apt/sources.list.d/docker.list
sudo -E sh -c 'apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D'
sudo apt-get update; sudo apt-get install -y -q docker-engine

That's all. One can argue about whether this is more secure than piping network content into "sh": at best, the pipe into sh can be made as secure as the underlying apt-get commands. What matters to me is that it's more transparent: it makes it pretty clear to the owner of the system what actually happened, and what to do if something went wrong.

The issue here is: is it a good thing to train junior people (who, I guess, can't be trusted to type "apt-get install" properly) to blindly trust code from the network like this? Remember that this shell script is going to ask them for their sudo password.
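If you do want the convenience script, a more transparent approach is to save it to a file, read it, and only then run it. This is a sketch; get-docker.sh is my name for the local copy, and the only thing taken from the Docker docs is the URL itself.

```shell
# Transparent alternative to `curl ... | sh`: download, inspect, then run.
url=https://get.docker.com/
script=get-docker.sh
if curl -fsSL "$url" -o "$script" 2>/dev/null; then
    sha256sum "$script"          # note the hash; compare on other machines later
    # ${PAGER:-less} "$script"   # actually read what it is going to do
    # sh "$script"               # run it only once you are satisfied
fi
```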

Sun, 09 Aug 2015

1 year with Jed

Last July 11 (2014), my wife and I acquired a 1991 VW Westfalia Multivan. This is the (T3) van, with the pop-top, fold down seats, and 2.1L water cooled engine. No kitchen, but seats 6, good for camping without getting soaked. (Some of you likely sneer at my use of the term "camping")

We had some challenges: new headers needed (big engine job, done by an expert), the accelerator cable came off (fixed it, then replaced it and the accelerator/transmission lever), and in June the driver-side radius rod broke... The radius rod got replaced, but I'm waiting for another one for the passenger side, and a replacement set of bushings. It will need an alignment afterwards.

But, geek that I am, I really want electronics in my recreational vehicle.

I got a "Car Wifi Router" that my brother found on AliExpress. Nice software: it came with OpenWRT Attitude Adjustment, and is supposed to run upstream OpenWRT without a problem. I thought I had upgraded it, but it turns out I hadn't.

I was trying to install it today: I wanted it above the window, where I could get power from the light that is there. I cut some wires and spliced in some connectors, and hooked it up before I screwed it in.

Smoke (well, smell) came out of the device. It's a little RAMips device that says it takes 12V at 1.5A, but clearly it isn't tolerant of automotive voltages, which can vary by at least 20% (11V through 13.6V is common; above 13.6V, I think, the car battery gets upset, so a regulator keeps it below that amount). The device is dead. Very sad.

It was a ZBT-WE826.

Sun, 02 Aug 2015

I think I got it

It took a while to get all the pieces together, put them in a nice style, and then edit the style sheet a bit. I'm using Emacs Muse Mode to edit, and then I push into git, and my desktop renders the blog pages to static HTML.

Turns out the style sheet template I liked most was done by the Emacs Muse Mode author, Michael Olson...

I still have lots of tweaks to do, and fill in some blanks.

I had split my .muse files and my .blog files (which are now going to be .txt files, because reconfiguring pyblosxom to process .blog files seemed impossible) into separate directories. I'm not sure that makes sense now.

My goal is to be able to write blog entries offline, and then git push them out.

Oh, yes... I never get an updated index.rss for the top level, even though it creates the feed files for categories. Surely one should be able to subscribe to my entire blog, not just a single article or category.

rendering '/blog/i_think_i_got_it.atom' ...
rendering '/blog/i_think_i_got_it.html' ...
rendering '/blog/i_think_i_got_it.rss' ...
rendering '/blog/i_think_i_got_it.rss20' ...
rendering '/blog/index.html' ...
rendering '/index.html' ...

It also appears that the $(calendar) plugin is broken for static rendering, as it always renders August 2006, which is the first month of my blog... Clearly, older entries would need to be re-rendered to add the right calendar data, or the calendar needs to be inserted using an IFRAME, and updated each time that month is updated.

Sat, 01 Aug 2015

First post!

This is your first post! If you can see this with a web-browser, then it's likely that everything's working nicely!

Wed, 29 Jul 2015

Muse Mode Rides again

I had been using XEmacs 21 and Muse Mode, and I switched blosxom to run in offline mode so that I'd have a blog of static pages that could easily be archived... I planned to finally style the CSS of my blog, but then I decided to switch to (GNU) Emacs... 23, and now version 24 from source code.... and muse mode broke...

This is my first test post, to see if it works again.

Wed, 30 Jan 2013

First Experiences using PrestoCard

Our prestocard.ca cards arrived in the mail on Monday. Hard to blame prestocard for taking 10 days to get them to us; more likely it's Canada Post's Dark Delivery regime... we are still getting Xmas cards. http://www.ottawacitizen.com/business/Mail+delivery+dark+might+brightest+forward+Canada+Post/7673752/story.html

So Meaghan went to work with the Prestocard on Tuesday. It worked fine in the morning. Her normal route is the 16 or 151-Westboro-95 to Bayview, the O-Train to Greenboro, and, if the number 43 driver decides he is picking up passengers, one stop so she can avoid a dangerous walk along a usually snow-covered, non-existent sidewalk north on Bank Street.

Things went okay on the way. The presto card site noticed what stop she got on. She didn't swipe on the O-train. Should she? She got on the 43, and it saw that too. Curiously, it doesn't say what actual bus route she was on...

The trip home wasn't as good. It saw her at location "15", which was the bus back to the O-Train, and then, 27 minutes later, it charged her another half fare (one ticket) at Bayview to come home. No idea why; we asked.

She had noticed this situation on the scanner, so we checked that night, and everything looked okay, but when we looked today, the system had caught up with the extra fare.

We were checking because I had tried yesterday to see if my prestocard had been loaded correctly with my February bus pass. On Tuesday it gave me a red light and told me to see customer service. Oh no, I think... will it work on Friday morning? Today, on my way home, using my paper January pass, I swiped my prestocard again... the machine happily announced that I had been charged a $2.60 fare.

Huh, I think? Does it mean I'm good for a $2.60 fare, or does it mean it's gonna try to deduct it from me? So far, it hasn't done anything yet. I'll see tomorrow.

My opinion is that it should have simply said: "Your February bus pass will be valid 2013-02-01. See driver if you need to pay a fare."

Comments to G+: https://plus.google.com/103865510556691933694/posts/5vRsQ1rX8na

Fri, 18 Jan 2013

Impressions of PrestoCard Site

I visited the Rideau Centre this morning to attempt to obtain a PrestoCard for myself (monthly pass), my wife (she uses tickets 3 days/week), and my son (1 ticket every week or so). I also wanted a spare card for when my mother-in-law visits. She drives from Barrhaven to our house, and takes the bus with us downtown because she is afraid to drive downtown.

The lineup at the Rideau Centre was basically out the door, and management seemed to be doing a good job of dealing with the unexpected number of people, but waiting an hour wasn't on my todo list today, so I proceeded to work.

I sat down at my computer and went to octranspo.com, and was led to prestocard.ca.

Javascript needs to be turned on for this site. I use NoScript to keep me safe from stupid things. The site should be usable for people without javascript; this is a simple requirement for a site to be accessible to people who have various disabilities. (And there is no reason for the moneris site to need javascript. None.)

So, the first use of javascript is to pop up a window to show me the usage agreement for the site. That's uselessly long. TOO LONG, DIDN'T READ. And it's in a small little window, very hard to read, with no way to change the font size (can you say "senior citizens"?), and I didn't find a way to print it. I didn't try copying and pasting it. How will I know if it changes? PLEASE FIRE THE LAWYER WHO WROTE THIS. IT IS USELESS. Visit tos-dr.info.

I walked through the process for my first card, saw the button "Edit Products", and clicked on it before I hit Pay. Oops, you lost all of the form entry bits. That's a real loss. I tried logging in again, only to realize that actually, you hadn't even created my account yet. The technical term for this failure is that your site is not RESTful. This means your web development guys are back in 1998. The .aspx extension on the URLs would seem to confirm that.

I went through things again, and went to payment, at which point I discovered that Moneris really does suck. Once I enabled javascript, it got all confused, so I had to start again.

Now that you have lost my payment, I come back and discover that I have an account now. Apparently you don't create an account until I click on "Pay", which is a serious mistake in user flow. Oh, and since you check whether my username is unique (rather than using my email address. DUH. Good web people figured that out a while ago) at the page where I enter it, by the time I get to payment it could already be in use.

So I ask for a card, and I'm asked to view the user agreement again. Another fail. I already agreed to it.

So I click on "Get a Card" on the left when I get to payment for my wife's card; I get myself a card, I get to payment, and you have forgotten about the first card. Okay, so you don't really have a shopping cart.

Finally, I ordered two cards, having to do two credit card transactions. I'm very glad that you are processing the credit card transactions elsewhere, because if they were processed through your site, I would not be using it.

I happened to return to the Contact Us page (in another tab) to enter another comment, and noticed that the page had expired. WAT? It's a contact page; there shouldn't be any state at all associated with it.

Given that the web people have had an extra 8 months to fix any issues, I am pretty upset about the quality of this interface. I will have to use this site 6-8 times a year, and I really expect it to work properly. I have done sites like this in 6-8 weeks, and they worked way better than this one.

Mon, 09 Jul 2012

Upgrade an application to ruby 1.9

A newer database system got installed with Debian Wheezy, which makes ruby 1.9 the default, and dammit, it makes it rather difficult to convince it that I want to run ruby 1.8, which my application was written in.

So I'm going to upgrade the application to ruby 1.9, it's about time to do so. But given all the co-existence code between 1.8 and 1.9, how to do this has been a bit vexing.

So, first, I made sure to install ruby 1.9:

knothole-[~] mcr 1005 %sudo apt-get install ruby1.9.1

This gives me /usr/bin/gem1.9.1 as well.

But I'm going to make this the default explicitly:

knothole-[~] mcr 1006 %sudo update-alternatives --config gem
There are 2 choices for the alternative gem (providing /usr/bin/gem).

  Selection    Path               Priority   Status
------------------------------------------------------------
* 0            /usr/bin/gem1.8     180       auto mode
  1            /usr/bin/gem1.8     180       manual mode
  2            /usr/bin/gem1.9.1   10        manual mode

Press enter to keep the current choice[*], or type selection number: 2

I did not see a 1.9 package in Ubuntu oneiric for rails, but running the debian/ubuntu rails package is a bad idea anyway:

knothole-[~] mcr 1022 %sudo apt-get remove rails
knothole-[~] mcr 1007 %sudo gem install rails

Now install some baseline things that you will need:

% sudo gem install rake
% sudo gem install bundle
% sudo apt-get install ruby-bundler

When I ran bundle install, I got bit by:

ERROR:  While executing gem ... (ArgumentError)
    invalid byte sequence in US-ASCII

Other people wrote about this at:

http://help.rubygems.org/discussions/problems/501-broken-utf-8-handling-in-newest-rubygems-when-environment-locales-are-not-set

The solution is to make sure that your locales are set:

% export LC_ALL=en_CA.UTF-8
% export LANG=en_CA.UTF-8
% bundle install

Now, you'll have a bundle running with a ruby 1.9 interpreter, and it will install gems for 1.9 rather than 1.8!

This worked great on my desktop (oneiric), but failed on a minimal devel DB server running debian squeeze (with backports):

Installing json (1.7.3) with native extensions

ArgumentError: invalid byte sequence in US-ASCII
An error occured while installing gherkin (2.11.0), and Bundler cannot
continue.
Make sure that `gem install gherkin -v '2.11.0'` succeeds before bundling.

I puzzled about this for a while, and finally found that in fact I didn't have the en_CA locale loaded. I edited /etc/locale.gen, then ran /usr/sbin/locale-gen, and all was well.
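A sketch of that fix follows. The enable_locale helper and its sed expression are my rendering of the hand edit, not a standard tool; run the real thing as root.

```shell
# Sketch: enable en_CA.UTF-8 on Debian by uncommenting it in
# /etc/locale.gen and regenerating the locale data.
enable_locale() {
    # Uncomment a "# en_CA.UTF-8 UTF-8" style line in the given file.
    sed -i 's/^# *\(en_CA\.UTF-8\)/\1/' "$1"
}
# enable_locale /etc/locale.gen     # as root
# /usr/sbin/locale-gen              # as root
```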

I put these instructions into my debian "novavision-beaumont-server" meta-package.

% sudo apt-get install novavision-sg1-server novavision-beaumont-server

just to be sure you have the latest stuff.

Sun, 05 Feb 2012

LVM mirroring: the right way

LVM now supports mirroring inside of LVM, rather than requiring that you put mirrors underneath LVM physical volumes. This provides much more flexibility: some volumes can be mirrored and some not (such as swap partitions), and different RAID algorithms can be used. LVM uses the same underlying mechanisms as the Linux RAID system (mdadm) to do the RAID operations, so there is no change in overall performance.

Lucas and I learnt on the Hydra project that creating a mirror as follows:

lvconvert -m 1 --corelog /dev/nv0/time1root

or at lvcreate time:

lvcreate -L 4G --name time1root -m 1 --corelog --nosync /dev/nv0

while it works, produces a mirror that keeps certain meta-info in memory only. Should the machine reboot in an uncontrolled way, the mirror will be marked as bad and rebuilt in order to validate the meta-data.

On a machine with VMs running (nvxen-0, crtlXX), it can take hours after a reboot for the mirrors to rebuild. The correct answer, it turns out, is to use --mirrorlog mirrored, plus an option to allow the mirror logs to go anywhere.

lvconvert -m 1 --mirrorlog mirrored --alloc anywhere /dev/nv0/time1root

The allocation policy of "anywhere" permits the two 4M mirror logs (4M is the minimum allocation that LVM can do) to be kept on the same disks as the data they are mirroring. Otherwise, if you have only two physical volumes, you cannot put the logs anywhere: the default policy (which I think is wrong) is to insist that the mirror logs go on different volumes than the data. (I don't know why this is necessary.)

Converting between the two is a pain: the only way I found to do it is to remove the mirroring and then re-create it.

ionice -c3 lvconvert -m 0 /dev/nv0/time1root
ionice -c3 lvconvert -m 1 --mirrorlog mirrored --alloc anywhere /dev/nv0/time1root

I wrote a script to process the output of lvs and do this. The ionice keeps the process in the background, not chewing up I/O.
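My script isn't reproduced here, but a minimal sketch of the idea could look like the following. The function name and the lv_attr test are my assumptions, not the actual script; it emits the commands rather than running them, so you can inspect before piping to sh.

```shell
# Hypothetical sketch of a remirroring script: reads "lv vg attr" lines
# (as produced by: lvs --noheadings -o lv_name,vg_name,lv_attr) and emits
# the two lvconvert commands for each mirrored LV (lv_attr starting 'm').
remirror() {
    while read lv vg attr; do
        case "$attr" in
            m*)
                echo "ionice -c3 lvconvert -m 0 /dev/$vg/$lv"
                echo "ionice -c3 lvconvert -m 1 --mirrorlog mirrored --alloc anywhere /dev/$vg/$lv"
                ;;
        esac
    done
}

# Once the emitted commands look right, run for real:
#   lvs --noheadings -o lv_name,vg_name,lv_attr | remirror | sh
```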

On the fresh boot after a crash, however, you may find your system is almost completely unresponsive as it tries to resync dozens of mirrors. (/dev/md0-style raid devices get this right.) How to fix it: find the kcopyd kernel processes and run ionice on them:

ps ax | grep '[k]copyd' | awk '{print $1}' | while read pid; do sudo ionice -c3 -p "$pid"; done

Once you have done this, you can get in long enough to run the lvconvert. I suggest you remove all the mirrors first (-m 0), as that stops the resync operation from getting in the way of the resync you will have to do anyway.