Michael's musings

This is a blog of
mcr at sandelman.ca

Wed, 30 Jan 2013

First Experiences using PrestoCard

Our PRESTOcard.ca cards arrived in the mail on Monday. Hard to blame Presto for taking 10 days to get them to us; more likely it's Canada Post's Dark Delivery regime... we are still getting Xmas cards. http://www.ottawacitizen.com/business/Mail+delivery+dark+might+brightest+forward+Canada+Post/7673752/story.html

So Meaghan went to work with the Prestocard on Tuesday. It worked fine in the morning. Her normal route is the 16 (or the 151 to Westboro, then the 95) to Bayview, the O-Train to Greenboro, and, if the number 43 driver decides he is picking up passengers, one stop on the 43 so she can avoid a dangerous walk north on Bank Street along a usually snow-covered, non-existent sidewalk.

Things went okay on the way in. The Presto card site noticed what stop she got on at. She didn't swipe on the O-Train; should she have? She got on the 43, and it saw that too. Curiously, it doesn't say which actual bus route she was on...

The trip home wasn't as good. It saw a tap at location "15", which was the bus back to the O-Train, and then, 27 minutes later, it charged her another half fare (one ticket) at Bayview to come home. No idea why; we have asked.

She had noticed this on the scanner, so we checked the site that night; everything looked okay then, but when we looked today, the system had caught up and showed the extra fare.

We were checking because I had tried yesterday to see if my Prestocard had been loaded correctly with my February bus pass. On Tuesday it gave me a red light and told me to see customer service. Oh no, I think... will it work on Friday morning? Today, on my way home, using my paper January pass, I swiped my Prestocard again... the machine happily announced that I had been charged a $2.60 fare.

Huh, I think? Does it mean I'm good for a $2.60 fare, or does it mean it's going to try to deduct it from me? So far, it hasn't done anything yet. I'll see tomorrow.

My opinion is that it should simply have said: "Your February bus pass will be valid 2013-02-01. See the driver if you need to pay a fare."

Comments to G+: https://plus.google.com/103865510556691933694/posts/5vRsQ1rX8na

posted at: 23:31 | path: /transit | permanent link to this entry

Fri, 18 Jan 2013

Impressions of PrestoCard Site

I visited the Rideau Centre this morning to attempt to obtain a PrestoCard for myself (monthly pass), my wife (she uses tickets 3 days/week), and my son (1 ticket every week or so). I also wanted a spare card for when my mother-in-law visits. She drives from Barrhaven to our house and takes the bus downtown with us, because she is afraid to drive downtown.

The lineup at the Rideau Centre was basically out the door. Management seemed to be doing a good job of dealing with the unexpected number of people, but waiting an hour wasn't on my todo list today, so I proceeded to work.

I sat down at my computer, went to OCtranspo.com, and was led to prestocard.ca.

Javascript needs to be turned on for this site. I use NoScript to keep me safe from stupid things. The site should be usable by people without Javascript; this is a simple requirement for a site to be accessible to people with various disabilities. (And there is no reason for the Moneris site to need Javascript. None.)

So, the first use of Javascript is to pop up a window to show me the usage agreement for the site. It's uselessly long. TOO LONG, DIDN'T READ. And it's in a small little window, very hard to read, impossible to change the font size (can you say "senior citizens"?), and I didn't find a way to print it. I didn't try copying and pasting it. How will I know if it changes? PLEASE FIRE THE LAWYER WHO WROTE THIS. IT IS USELESS. Visit tos-dr.info.

I walked through the process for my first card, saw the button "Edit Products", and clicked on it before I hit Pay. Oops: you lost all of the form entries. That's a real loss. I tried logging in again, only to realize that you hadn't actually created my account yet. The technical term for this failure is that your site is not RESTful, which means your web development guys are back in 1998. The .aspx extension on the URLs would seem to confirm that.

I went through things again and got to payment, at which point I discovered that Moneris really does suck. Once I enabled Javascript, it got all confused, so I had to start again.

Now that you have lost my payment, I come back and discover that I have an account now. Apparently you don't create the account until I click on "Pay", which is a serious mistake in user flow. Oh, and since you check that my username is unique (rather than just using my email address; DUH, good web people figured that out awhile ago) at the page where I enter it, by the time I get to payment it could already be in use.

So I ask for a card, and I'm asked to view the user agreement again. Another fail: I already agreed to it.

So I click on "Get a card" on the left to order my wife's card, and when I get to payment, you have forgotten about the first card. Okay, so you don't really have a shopping cart.

Finally, I ordered the two cards, having to do two credit card transactions. I'm very glad that you process the credit card transactions elsewhere, because if they went through your site, I would not be using it.

I happened to return to the contact-us page (in another tab) to enter another comment, and noticed that the page had expired. WAT? It's a contact-us page; there shouldn't be any state associated with it at all.

Given that the web people have had an extra 8 months to fix any issues, I am pretty upset about the quality of this interface. I will have to use this site 6-8 times a year, and I really expect it to work properly. I have built sites like this in 6-8 weeks, and they worked far better than this.

posted at: 11:50 | path: /transit | permanent link to this entry

Mon, 09 Jul 2012

Upgrade an application to ruby 1.9

A newer database system got installed with Debian wheezy, which makes ruby 1.9 the default, and dammit, it is rather difficult to convince it that I want to run ruby 1.8, which my application was written for.

So I'm going to upgrade the application to ruby 1.9; it's about time to do so. But given all the co-existence machinery between 1.8 and 1.9, how to do this has been a bit vexing.

So, first, I made sure to install ruby 1.9:

knothole-[~] mcr 1005 %sudo apt-get install ruby1.9.1

This gives me /usr/bin/gem1.9.1 as well.

But I'm going to make it the default explicitly:

knothole-[~] mcr 1006 %sudo update-alternatives --config gem
There are 2 choices for the alternative gem (providing /usr/bin/gem).

  Selection    Path               Priority   Status
* 0            /usr/bin/gem1.8     180       auto mode
  1            /usr/bin/gem1.8     180       manual mode
  2            /usr/bin/gem1.9.1   10        manual mode

Press enter to keep the current choice[*], or type selection number: 2

I did not see a ruby 1.9 rails package in Ubuntu oneiric, but running the debian/ubuntu rails package is a bad idea anyway:

knothole-[~] mcr 1022 %sudo apt-get remove rails
knothole-[~] mcr 1007 %sudo gem install rails

Now install some baseline things that you will need:

% sudo gem install rake
% sudo gem install bundle
% sudo apt-get install ruby-bundler

When I ran bundle install, I got bit by:

ERROR:  While executing gem ... (ArgumentError)
    invalid byte sequence in US-ASCII
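
This error is a consequence of Ruby 1.9's new encoding model: with no locale set, Ruby tags external data as US-ASCII, so the UTF-8 bytes in some gems' files (author names, say) become invalid strings. A minimal demonstration of the underlying failure (the string here is my own example, not from a gem):

```ruby
# Ruby 1.9 tags every string with an encoding. With LANG unset, external
# data defaults to US-ASCII, so the UTF-8 bytes for "é" are invalid:
s = "caf\xC3\xA9".force_encoding("US-ASCII")
puts s.valid_encoding?   # => false

# Any regexp match against such a string raises the error gem printed:
begin
  s =~ /e/
rescue ArgumentError => e
  puts e.message         # => "invalid byte sequence in US-ASCII"
end
```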

Other people have written about this error as well.


The solution is to make sure that your locales are set:

% export LC_ALL=en_CA.UTF-8
% export LANG=en_CA.UTF-8
% bundle install

Now you'll have a bundle running with a ruby 1.9 interpreter, and it will install gems for 1.9 rather than 1.8!

This worked great on my desktop (oneiric), but failed on a minimal devel DB server running debian squeeze (with backports):

Installing json (1.7.3) with native extensions

ArgumentError: invalid byte sequence in US-ASCII
An error occurred while installing gherkin (2.11.0), and Bundler cannot continue.
Make sure that `gem install gherkin -v '2.11.0'` succeeds before bundling.

I puzzled about this for a while, and finally found that I did not in fact have the en_CA locale generated. I edited /etc/locale.gen, then ran /usr/sbin/locale-gen, and all was well.

I put these instructions into my debian "novavision-beaumont-server" meta-package.

% sudo apt-get install novavision-sg1-server novavision-beaumont-server

just to be sure you have the latest stuff.

posted at: 18:29 | path: /ruby-on-rails | permanent link to this entry

Sun, 05 Feb 2012

LVM mirroring: the right way

LVM now supports mirroring inside of LVM, rather than requiring that you put mirrors underneath LVM physical volumes. This provides much more flexibility: some volumes can be mirrored and some not (such as swap partitions), and different RAID algorithms can be used. LVM uses the same underlying mechanisms as the Linux RAID system (mdadm) for the RAID operations, so there is no change in overall performance.

Lucas and I learnt on the Hydra project that creating a mirror as follows:

lvconvert -m 1 --corelog /dev/nv0/time1root

or at lvcreate time:

lvcreate -L 4G --name time1root -m 1 --corelog --nosync /dev/nv0

while it works, produces a mirror that keeps certain meta-info in memory only. Should the machine reboot in an uncontrolled way, the mirror will be marked as bad and rebuilt in order to revalidate the meta-data.

On a machine with VMs running (nvxen-0, crtlXX), it can take hours after a reboot for the mirror to rebuild. The correct answer, it turns out, is to use --mirrorlog mirrored, plus an option to put the mirror logs anywhere.

lvconvert -m 1 --mirrorlog mirrored --alloc anywhere /dev/nv0/time1root

The allocation policy of "anywhere" permits the two 4M mirror logs (4M is the minimum allocation that LVM can do) to be kept on the same disks as the data they are mirroring. Otherwise, if you have only two physical volumes, you cannot place the logs at all: the default policy (which I think is wrong) insists that the mirror logs go on different volumes than the data. (I don't know why this is necessary.)

Converting between the two is a pain: the only way I found to do it is to remove the mirroring and then re-create it.

ionice -c3 lvconvert -m 0 /dev/nv0/time1root
ionice -c3 lvconvert -m 1 --mirrorlog mirrored --alloc anywhere /dev/nv0/time1root

I wrote a script to process the output of lvs and do this. The ionice keeps the process in the background, not chewing up I/O.
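
The script amounts to reading lvs output and emitting that pair of lvconvert commands per volume. A hypothetical sketch of the idea (the helper name and sample volumes are mine, not the original script's):

```ruby
# Build the pair of lvconvert commands for each logical volume listed
# by `lvs --noheadings -o vg_name,lv_name`-style output.
def convert_commands(lvs_output)
  lvs_output.lines.flat_map do |line|
    vg, lv = line.split
    dev = "/dev/#{vg}/#{lv}"
    ["ionice -c3 lvconvert -m 0 #{dev}",
     "ionice -c3 lvconvert -m 1 --mirrorlog mirrored --alloc anywhere #{dev}"]
  end
end

sample = "  nv0 time1root\n  nv0 time1swap\n"
convert_commands(sample).each { |cmd| puts cmd }
```

The real script would run each command with system(); generating the list first makes it easy to review before letting it loose on your volume group.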

On the fresh boot after a crash, however, you may find your system almost completely unresponsive as it tries to resync dozens of mirrors. (Here, /dev/md0-style raid devices get it right.) How to fix it: find the kcopyd kernel processes and run ionice on them:

ps ax | grep kcopyd | awk '{print $1}' | while read pid; do sudo ionice -c3 -p $pid; done

Once you have done this, you can get in long enough to run the lvconvert. I suggest you remove all the mirrors first (-m 0), as that stops the resync operation from getting in the way of the resyncs you will have to do anyway.

posted at: 12:19 | path: /lvm | permanent link to this entry

Thu, 01 Dec 2011

Active Scaffold obscures internal errors

In a newly scaffolded model and controller, created with ActiveScaffold 3.0.5 on rails 3.0.9, I was getting errors from the default generated rspec code that I could not diagnose:

  1) Admin::ConnectionsController POST create with valid params creates a new Connection
     Failure/Error: post :create, :connection => valid_attributes
       You have a nil object when you didn't expect it!
       You might have expected an instance of Array.
       The error occurred while evaluating nil.each
     # ./spec/controllers/admin/connections_controller_spec.rb:54

Worse, these things were working just fine in RAILS_ENV=development.

Well, of course it is failing on the line where the :create is invoked. But where is the nil.each occurring?

I ran things with:

bundle exec rspec -d spec/controllers/admin/connections_controller_spec.rb \
   -e "POST create with valid params creates a new Connection"

after putting "debugger" in before the test case:

  describe "POST create" do
    describe "with valid params" do
      it "creates a new Connection" do
        # expect {
          post :create, :connection => valid_attributes
        #}.to change(Connection, :count).by(1)

(I'm still looking for a good ruby-debug mode that works like gdb-mode in Emacs, one that can show me the code around where I am...)

One winds up in the rescue in actionpack-3.0.9/lib/action_controller/metal/rescue.rb, on line 19.

So, stick a breakpoint on the super there:

break /var/lib/gems/1.8/gems/actionpack-3.0.9/lib/action_controller/metal/rescue.rb:17

This lets you see the exception:

(rdb:1) p exception
#<NoMethodError: You have a nil object when you didn't expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.each>

The annoying part is that the action is invoked at /var/lib/gems/1.8/gems/actionpack-3.0.9/lib/action_controller/metal/instrumentation.rb:29

ActiveSupport::Notifications.instrument("process_action.action_controller", raw_payload) do |payload|

So it evaluates code with one block passed to another block, and it seems really hard (a major ruby-debug limitation) to put a breakpoint at the beginning of a block that is passed in.

I had to resort to editing that file, and sticking "debugger" in there!

Finally, one gets to:

send_action(method_name, *args)

In the debugger, the right thing to do is:

catch NoMethodError

This finally shows me that the failure is at:


Why? Because attributes is nil.

Why? Because the generated controller spec file says:

    describe "with valid params" do
      it "creates a new Connection" do
        expect {
          post :create, :connection => valid_attributes
        }.to change(Connection, :count).by(1)

It should have been generated as:

    describe "with valid params" do
      it "creates a new Connection" do
        expect {
          post :create, :record => valid_attributes
        }.to change(Connection, :count).by(1)

posted at: 11:37 | path: /ruby-on-rails | permanent link to this entry

Mon, 08 Aug 2011

Domain Squatter Avoidance tool

Here is a nice use for a distributed hash table, backed by the new IETF REPUTE work.

I just typed "antipope.net" rather than antipope.org trying to get to Charlie Stross's web site. A squatter offered to sell me the domain. Some squatters do it solely for ad revenue, and I'd rather not arrange for them to get a dime.

I want a button for my browser (Chromium) which logs that name into a reputation database, indicating that these guys are squatters, and letting me (once I know the correct name) enter the proper one. The same plugin would consult that database whenever I typed something wrong, and suggest an alternative.

posted at: 09:24 | path: /internet | permanent link to this entry

Sun, 17 Jul 2011

Eclipse and Android SDK never ran

I've had a problem getting Eclipse, and specifically the Android SDK, to run on my Debian laptop for over a year now. I've generally just VNC'ed to a more powerful box and run it there.

The problem I had was that most network operations in Eclipse would fail with "network unreachable". Not a big deal for day-to-day things, but you need the network to install the Android SDK kits and to install Eclipse plugins.

I had been trying to strace things to figure out what it was, and finally found it:

connect(26, {sa_family=AF_INET6, sin6_port=htons(443), inet_pton(AF_INET6, "::ffff:", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)

Huh: it's doing IPv6 connections. GOOD. But it hasn't set the right socket option (IPV6_V6ONLY) to permit IPv4-mapped connections to work, even though on Debian the bindv6only sysctl is now not set.

See: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=560056
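
The knob in question is the per-socket IPV6_V6ONLY option; when it is left at 1, connects to ::ffff:a.b.c.d v4-mapped addresses fail with exactly the ENETUNREACH the strace showed. A sketch in Ruby (standing in for what Eclipse's JVM should be doing on each socket):

```ruby
require 'socket'

s = Socket.new(:INET6, :STREAM)

# Clearing IPV6_V6ONLY lets this v6 socket also reach IPv4 hosts
# via ::ffff: v4-mapped addresses.
s.setsockopt(Socket::IPPROTO_IPV6, Socket::IPV6_V6ONLY, 0)
puts s.getsockopt(Socket::IPPROTO_IPV6, Socket::IPV6_V6ONLY).int  # => 0
```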

posted at: 16:36 | path: /android | permanent link to this entry

Sun, 03 Jul 2011

"Over The Top" Television

re: http://crtc.gc.ca/eng/archive/2011/2011-344.htm

In Broadcasting Regulatory Policy 2009-329, the Commission set out the results of its review of broadcasting in new media. This was followed by Broadcasting Order 2009-660, which amended, clarified and affirmed the continued appropriateness of the New Media Exemption Order applied to new media broadcasting undertakings. Since then, there has been an acceleration of technological, market and consumer behaviour trends that may influence the Canadian broadcasting system's ability to achieve the policy objectives of the Broadcasting Act. Increasingly, programming is being provided by entities on multiple platforms and separate from the physical infrastructure over which it is delivered. These "over-the-top" entities are both foreign and domestic.

1. My name is Michael Richardson. I am chief scientist of Sandelman Software Works. I am writing today about your consultation on "Over-The-Top" television, such as "netflix" and others like it. I am a pioneer of the Internet; my use of it dates back to 1987. I am an active participant in the Internet Engineering Task Force, and I've authored a number of RFCs in the security field.

2. I find the entire question of "over-the-top" to itself be indicative of a bias to begin with. My question was: over the top of what? I get as much television "over" Canada Post as I do "over-the-air".

3. The Internet does not run on top of other things; more and more, other things run on top of the Internet. Neither the incumbent cable nor the incumbent telephone companies have been competent enough to supply my home office with Internet. The members of my family who have tried them have found their service lacking, and have gone to reliable Internet suppliers, ones that are not vertically integrated and therefore have no bias against other things.

4. Since 1995, I have not subscribed to "cable" TV. I tried microwave (LOOK), but when I moved it was not available, and then I went to satellite (Star Choice, now Shaw). Since it became Shaw, my level of service has steadily declined, while my rates have gone up. My family uses the satellite TV less and less (we are now on the lowest-tier subscription, primarily for US network channels) and relies on DVD delivery from ZIP and on netflix over my bridged-DSL connection with Storm Internet.

5. Netflix has reported "problems" with Canadian residential internet connections. I have none. I do not use an incumbent telco with a competing service as my supplier. Please connect the dots.

6. I do not use "HD" services at this time, as I have no TVs like that. I consider current HD TV systems to be too inflexible and yet too complicated for my use. When the time comes, I will replace the "screens" in my home with dumb computer-grade displays, connected to media boxes running open standard systems.

7. The available content on Netflix leaves a lot to be desired. The amount available in Canada, I'm told, is much less than in the US due to licensing problems. This upsets me greatly: I would like to see a mandatory licensing regime that separates whom I choose to deliver the content I want from what content is available.

8. Netflix offers a service that apparently permits some Apple and some Microsoft users to watch television on their computers. This system uses a proprietary, copyright-infringing system to display the content. I say that it infringes the copyright laws because this "Digital Rights Management" system in fact denies me rights that I would have on other systems. The system is also incompatible with non-Microsoft systems (tied selling), such as the Ubuntu Linux that runs at my house.

9. We happen to have a Nintendo Wii game console that has a netflix system for it, and I'm told that the Netflix application for it may also contain DRM. However, the output of my Wii is a DRM-free analogue signal, and therefore my rights with this system are identical to what they would be with broadcast television.

10. I am preparing myself for ATSC. I intend to put an antenna on my roof to receive US Network Channels from Rochester NY, and along with an ATSC tuner on each of my three TVs, I should be able to get Ottawa broadcast channels from Camp Fortune. At that point I will stop subscribing to satellite service: they have provided me with essentially no value.

11. At this point, what I would like is the ability to pay for the content that I want. I would like to be able to vote with my wallet, rather than have the CRTC tell me. I expect some service (such as Netflix, or a competitor) to offer to intermediate my transactions, reducing the cost of each transaction, and to deal with the production studios directly.

12. I would like to:

a) provide a tip of approximately $0.25 for a show that I like. This would be voluntary on my part. I would do this because I want them to produce more like it. I want to do this even for shows that might have been out of "print" for a long time, for instance Three's Company, or old episodes of Sesame Street, which continue to have significant value. Right now, at most, I can provide a "star" rating.

b) provide a bond (a promise) that I would tip for more episodes of a series that I like. This removes the role of the executives of i) the incumbent cable/satellite companies, and ii) the specialty channels, who it seems continue to be reluctant to take risks, and have significantly disrupted shows that had significant fan bases and very good writing. If this scares these companies, tough. The CRTC has no mandate to protect companies with out-dated business plans.

c) provide a tip to a "network" such as CBCKids who might provide me with a playlist of shows to watch and timely interactive ways to engage kids. Note I would be tipping for the playlist (a list of recommendations) not for the shows themselves.

13. This is particularly important to me for children's shows, as I will only let my child watch the TV stations that do not feature advertising.

End of Document

posted at: 18:10 | path: /crtc | permanent link to this entry

Mon, 09 May 2011

Problems (insecurities) in ActiveResource

I have an application that talks to Redmine/Chiliproject using its API, with results in JSON. I use ActiveResource to make these calls, and it suddenly started failing after an upgrade from Redmine to Chiliproject:

ActiveRecord::UnknownAttributeError: unknown attribute: created_on

The fact that I was getting an error from ActiveRecord and not ActiveResource was puzzling. My ActiveResource class was called ProjectResource. The thing that I was retrieving was a "project", and yes, I happened to have a model called "Project", which was a subclass of ActiveRecord.

Looking at the JSON results using curl:

marajade-[~/C/dracula/hourbank3] mcr 10293 %curl 'http://localhost:3100/projects/show/16?format=json&key=abcdAPIKEY09123456789'
{"project":{"description":"Voice and Video softphone system for Android, with SIP support.","updated_on":"2010/10/08 10:10:24-0400","identifier":"thomas-watson","homepage":"","name":"Thomas-Watson","created_on":"2009/08/23 12:21:38 -0400","id":16}}

and also in the debugger, at

(rdb:1) c
Breakpoint 1 at /var/lib/gems/1.8/gems/activeresource-3.0.4/lib/active_resource/base.rb:889
new(record).tap do |resource|
(rdb:1) p record
{"project"=>{"name"=>"Thomas-Watson", "created_on"=>"2009/08/23 12:21:38 -0400", "id"=>16, "updated_on"=>"2010/10/08 10:10:24 -0400", "homepage"=>"", "description"=>"Voice and Video softphone system for Android, with SIP support.", "identifier"=>"thomas-watson"}}

What happens next is that the word "project" is passed to


and this finds and returns the "Project" class in my model. My model does not have a field created_on, hence the error.

So there are three problems with this behaviour:

Additions to the API should not break my old code; unknown attributes should simply be ignored.

There is no guarantee that the class that was found, "Project", has any of the behaviour I need in the object returned from ActiveResource.

Worst of all, since the word "project" came from the remote system, the remote system could pick any class it wanted and invoke code on it. It's a reverse attack by a server on its client, and it is wrong to assume that the server is fully trusted by the client.
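
The danger is easy to demonstrate: the hash key in the JSON, a string chosen by the server, is what gets resolved to a Ruby class on my side. A stripped-down sketch of the risky pattern (the model class here is a stand-in, not ActiveResource's actual code):

```ruby
# My local model, which the remote system has no business selecting.
class Project
end

# The JSON from the server; the key "project" is server-controlled.
record = { "project" => { "id" => 16, "created_on" => "2009/08/23" } }

# ActiveResource-style lookup: capitalize the key, find a constant.
name  = record.keys.first.capitalize   # => "Project"
klass = Object.const_get(name)         # resolves to *my* Project class
puts klass == Project                  # => true
```

Anything the code subsequently calls on klass (or on klass.new) is being chosen, indirectly, by the server.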

I'm not sure what the easiest way to fix this is, but it's certainly wrong, and it's been in ActiveResource for a while.

posted at: 15:14 | path: /ruby-on-rails | permanent link to this entry

Sun, 24 Apr 2011

A novel way to do PBX extensions

At CREDIL we are expanding our Asterisk out to serve the entire floor. We didn't assign our extensions particularly efficiently (number-wise), and I was thinking about better ways to do them.

A really (math-)geeky way occurred to me: give employee number n the (n+2)'th prime (1 = first prime, 2 = second prime; the first employee gets extension 3).

Then, if you need a conference call with employees number 4, 6 and 9, you dial the product of their extensions. The primes are 1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, ...; 4+2 = 6th prime is 11, 6+2 = 8th prime is 17, and 9+2 = 11th prime is 29. So dial 11*17*29 = 5423.

The extensions are still within 4 digits for the first 1000 employees.
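
The arithmetic above is easy to check mechanically; extension() below is my name for the mapping, and the tiny prime generator just keeps the sketch self-contained (note it counts from 2, so the (n+2)'th prime with 1 included is its (n+1)'th entry):

```ruby
# Return the first `count` primes (2, 3, 5, ...) by trial division.
def primes(count)
  list = []
  n = 1
  until list.size == count
    n += 1
    list << n if (2..Math.sqrt(n)).none? { |d| (n % d).zero? }
  end
  list
end

# Employee n gets the (n+2)'th prime, counting 1 as the first prime:
# employee 1 -> 3, employee 4 -> 11, and so on.
def extension(n)
  primes(n + 1).last
end

exts = [4, 6, 9].map { |n| extension(n) }   # => [11, 17, 29]
puts exts.reduce(:*)                        # => 5423
```

Factoring the dialled number back into primes recovers exactly the set of participants, which is what makes the scheme work.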


All multiples of your extension are yours to do anything you want with, and since your extension times a power of 2 is never a conference bridge, you have a lot of bits you can use to encode useful things. Want to call me without ringing me? Okay, set bit number 2. Want to call me and never go to voice mail? Okay, set bit number 3... etc.

posted at: 15:15 | path: /xkcd | permanent link to this entry

Thu, 21 Apr 2011

Time for a new Monarch

To Her Majesty Elizabeth the Second,

by the Grace of God, of Great Britain, Ireland and the British Dominions beyond the Seas Queen, Defender of the Faith, Duchess of Edinburgh, Countess of Merioneth, Baroness Greenwich, Duke of Lancaster, Lord of Mann, Duke of Normandy, Sovereign of the Most Honourable Order of the Garter, Sovereign of the Most Honourable Order of the Bath, Sovereign of the Most Ancient and Most Noble Order of the Thistle, Sovereign of the Most Illustrious Order of Saint Patrick, Sovereign of the Most Distinguished Order of Saint Michael and Saint George, Sovereign of the Most Excellent Order of the British Empire, Sovereign of the Distinguished Service Order, Sovereign of the Imperial Service Order, Sovereign of the Most Exalted Order of the Star of India, Sovereign of the Most Eminent Order of the Indian Empire, Sovereign of the Order of British India, Sovereign of the Indian Order of Merit, Sovereign of the Order of Burma, Sovereign of the Royal Order of Victoria and Albert, Sovereign of the Royal Family Order of King Edward VII, Sovereign of the Order of Merit, Sovereign of the Order of the Companions of Honour, Sovereign of the Royal Victorian Order, Sovereign of the Most Venerable Order of the Hospital of St John of Jerusalem

(see http://en.wikipedia.org/wiki/List_of_titles_and_honours_of_Queen_Elizabeth_II )

On this, Our Birthday, on which I turn 40 while you are still more than twice my age, and likely four times my wisdom, I wanted to share some thoughts I have had over the last few years.

I am your direct subject, having been born in London, as well as your loyal subject in the "British Dominions beyond the Seas". I'm actually a fan of having a monarch, which is rather unpopular these days. I even met Your Highness once when you visited Fredericton, but I was too little then to know enough to be impressed.

First, congratulations on celebrating the marriage of your grandson. I know that things will go well next week, and we look forward to his visit to Ottawa this summer.

I am sure that you have given a lot of thought to succession. I wondered if you had considered that Prince William would make a very nice King. A very nice young King, one who could rally the youth of today, and bring a unity that politicians yearn for but have seldom delivered.

Does Prince Charles actually want to be King? Perhaps after a brief Honeymoon, you and Prince Charles might consider abdicating in favour of Prince William.

I suggest sometime in 2012; maybe Feb. 29 would be auspicious, or maybe April 21, 2012. I don't know: I am sure you will come up with something sensible.

posted at: 10:40 | path: /governance | permanent link to this entry

Thu, 17 Mar 2011

Dreamhost SSL certificates --- insecure

Dreamhost sells third-level GeoTrust SSL security certificates for $15/year. (You have to be an existing customer.)

It seems, however, that they do not give you the chance to upload a CSR. Instead, you are expected to fill out the DN information online, and then they generate the private key for you. And they keep that private key around in their database.

It also winds up in your browser cache, and if you have some kind of "trusted" SSL proxy between you and the Internet (as half of corporate users do), then it's going to be in the cache of that device too.

This is a FAIL. Not only is your private key subject to whatever insecurity they might have, but it's total FBI Patriot Act fodder.

(If there is some place to upload a CSR, we couldn't find it.)
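
For reference, generating the key and CSR locally, so that only the CSR ever leaves your machine, is a few lines with Ruby's OpenSSL bindings (the subject fields here are examples):

```ruby
require 'openssl'

# The private key is generated locally and never leaves this machine.
key = OpenSSL::PKey::RSA.new(2048)

req = OpenSSL::X509::Request.new
req.subject = OpenSSL::X509::Name.parse("/C=CA/O=Example/CN=www.example.com")
req.public_key = key.public_key
req.sign(key, OpenSSL::Digest.new('SHA256'))

# This PEM blob is all the CA needs to issue the certificate.
puts req.to_pem
```

Letting customers paste that blob into a form is all the "upload a CSR" feature has to be.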

posted at: 14:14 | path: /security | permanent link to this entry