Powered by Planet! Last updated: June 26, 2017 03:31 PM

Planet NUUG

June 20, 2017

Ole Aamot Gnome Development Blog

GNOME Internet Radio Locator 0.3.0 for GNOME 3

GNOME Internet Radio Locator 0.3.0 for GNOME 3 is now available.


You can download the gnome-internet-radio-locator 0.3.0 development tree from https://git.gnome.org/gnome-internet-radio-locator


Debian GNU/Linux unstable i386

Fedora 25 x86_64

Ubuntu 17.04 amd64

This release is built on GTK+ 3.0, GNOME Maps, libchamplain and gstreamer (gst-player).

Enjoy Free Internet Radio.

by oleaamot at June 20, 2017 02:01 PM

June 13, 2017

Peter Hansteen (That Grumpy BSD Guy)

Forcing the password gropers through a smaller hole with OpenBSD's PF queues

While preparing material for the upcoming BSDCan PF and networking tutorial, I realized that the pop3 gropers were actually not much fun to watch anymore. So I used the traffic shaping features of my OpenBSD firewall to let the miscreants inflict some pain on themselves. Watching logs became fun again.

Yes, in between a number of other things I am currently in the process of creating material for a new and hopefully better PF and networking session.

I've been fishing for suggestions for topics to include in the tutorials on relevant mailing lists, and one suggestion that keeps coming up (even though it's actually covered in the existing slides as well as The Book of PF) is using traffic shaping features to punish undesirable activity, such as

Idea for pf tutorial: throttling of http abusers using pf and altq. /cc @pitrh @stucchimax
— Dan Langille (@DLangille) April 16, 2017

What Dan had in mind here may very well end up in the new slides, but in the meantime I will show you how to punish abusers of essentially any service with the tools at hand in your OpenBSD firewall.

Regular readers will know that I'm responsible for maintaining a set of mail services including a pop3 service, and that our site sees pretty much round-the-clock attempts at logging on to that service with user names that come mainly from the local part of the spamtrap addresses that are part of the system to produce our hourly list of greytrapped IP addresses.

But do not let yourself be distracted by this bizarre collection of items that I've maintained and described in earlier columns. The actual useful parts of this article follow - take this as a walkthrough of how to mitigate a wide range of threats and annoyances.

First, analyze the behavior that you want to defend against. In our case that's fairly obvious: we have a service that's getting a volume of unwanted traffic, and looking at our logs, the attempts come fairly quickly, with a number of repeated attempts from each source address. This is similar enough to both the traditional ssh bruteforce attacks and, for that matter, to Dan's website scenario that we can reuse some of the same techniques in all of the configurations.

I've written about the rapid-fire ssh bruteforce attacks and their mitigation before (and of course it's in The Book of PF) as well as the slower kind where those techniques actually come up short. The traditional approach to ssh bruteforcers has been to simply block their traffic, and the state-tracking features of PF let you set up overload criteria that add the source addresses to the table that holds the addresses you want to block.

I have rules much like the ones in the example in place wherever I have an SSH service running, and those bruteforce tables are never totally empty.

For the system that runs our pop3 service, we also have a PF ruleset in place with queues for traffic shaping. For some odd reason that ruleset is fairly close to the HFSC traffic shaper example in The Book of PF, and it contains a queue that I set up mainly as an experiment to annoy spammers (as in, the ones that are already for one reason or the other blacklisted by our spamd).

The queue is defined like this:

   queue spamd parent rootq bandwidth 1K min 0K max 1K qlimit 300

Yes, that's right: a queue with a maximum throughput of 1 kilobit per second. I have been warned that this is small enough that the code may be unable to strictly enforce that limit due to the timer resolution in the HFSC code. But that didn't keep me from trying.

And now that I had another group of hosts that I wanted to just be a little evil to, why not let the password gropers and the spammers share the same small patch of bandwidth?

Now a few small additions to the ruleset are needed for the good guys to put the evil ones to work. We start with a table to hold the addresses we want to mess with. Actually, I'll add two, for reasons that will become clear later:

table <longterm> persist counters
table <popflooders> persist counters 

The rules that use those tables are:

block drop log (all) quick from <longterm> 

pass in quick log (all) on egress proto tcp from <popflooders> to port pop3 flags S/SA keep state \ 
(max-src-conn 2, max-src-conn-rate 3/3, overload <longterm> flush global, pflow) set queue spamd 

pass in log (all) on egress proto tcp to port pop3 flags S/SA keep state \ 
(max-src-conn 5, max-src-conn-rate 6/3, overload <popflooders> flush global, pflow) 
The last one lets anybody connect to the pop3 service, but any single source address can have no more than five simultaneous connections open, made at a rate of no more than six over three seconds.

Any source that trips one of these restrictions is overloaded into the popflooders table, the flush global part means any existing connections that source has are terminated, and when they get to try again, they will instead match the quick rule that assigns the new traffic to the 1 kilobit queue.

The quick rule here has even stricter limits on the number of allowed simultaneous connections, and this time any breach will lead to membership of the longterm table and the block drop treatment.

For the longterm table I already had a four week expiry in place (see man pfctl for details on how to do that), and I haven't gotten around to deciding what, if any, expiry I will set up for the popflooders.
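For reference, that kind of expiry boils down to a periodic pfctl(8) invocation; a sketch of a root crontab entry follows (the minute field and the hourly interval are my choices, and 2419200 is four weeks expressed in seconds):

```shell
# Root crontab sketch: once an hour, remove <longterm> entries whose
# last activity is more than four weeks (2419200 seconds) old.
43 * * * * pfctl -t longterm -T expire 2419200
```

See pfctl(8) for the exact semantics of -T expire on your OpenBSD version.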

The results were immediately visible. Monitoring the queues using pfctl -vvsq shows the tiny queue works as expected:

 queue spamd parent rootq bandwidth 1K, max 1K qlimit 300
  [ pkts:     196136  bytes:   12157940  dropped pkts: 398350 bytes: 24692564 ]
  [ qlength: 300/300 ]
  [ measured:     2.0 packets/s, 999.13 b/s ]

and looking at the pop3 daemon's log entries, a typical encounter looks like this:

Apr 19 22:39:33 skapet spop3d[44875]: connect from
Apr 19 22:39:33 skapet spop3d[75112]: connect from
Apr 19 22:39:34 skapet spop3d[57116]: connect from
Apr 19 22:39:34 skapet spop3d[65982]: connect from
Apr 19 22:39:34 skapet spop3d[58964]: connect from
Apr 19 22:40:34 skapet spop3d[12410]: autologout time elapsed -
Apr 19 22:40:34 skapet spop3d[63573]: autologout time elapsed -
Apr 19 22:40:34 skapet spop3d[76113]: autologout time elapsed -
Apr 19 22:40:34 skapet spop3d[23524]: autologout time elapsed -
Apr 19 22:40:34 skapet spop3d[16916]: autologout time elapsed -

Here the miscreant comes in way too fast and only manages to get five connections going before being shunted to the tiny queue to fight it out with known spammers for a share of the bandwidth.

I've been running with this particular setup since Monday evening around 20:00 CEST, and by late Wednesday evening the number of entries in the popflooders table had reached approximately 300.

I will decide on an expiry policy at some point, I promise. In fact, I welcome your input on what the expiry period should be.

One important takeaway from this, and possibly the most important point of this article, is that it does not take a lot of imagination to retool this setup to watch for and protect against undesirable activity directed at essentially any network service.

You pick the service and the ports it uses, then figure out what are the parameters that determine what is acceptable behavior. Once you have those parameters defined, you can choose to assign to a minimal queue like in this example, block outright, redirect to something unpleasant or even pass with a low probability.

All of those possibilities are part of the normal pf.conf toolset on your OpenBSD system. If you want, you can supplement these mechanisms with a bit of log file parsing that produces output suitable for feeding to pfctl to add to the table of miscreants. The only limits are, as always, the limits of your imagination (and possibly your programming abilities). If you're wondering why I like OpenBSD so much, you can find at least a partial answer in my OpenBSD and you presentation.
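The log file parsing mentioned above can be sketched in a few lines of shell. Everything below is illustrative rather than my production setup: the helper name, the threshold and the sample log lines are synthetic, and the regular expression will need adapting to your own daemon's log format:

```shell
#!/bin/sh
# Sketch: report source addresses with repeated failed logins, one per
# line, in a form suitable for feeding to pfctl.
flooders() {
    # $1 = minimum number of failures before an address is reported.
    # Assumes the source address is the last field of matching lines.
    awk -v limit="$1" '
        /authentication error|Failed password/ { count[$NF]++ }
        END { for (ip in count) if (count[ip] >= limit) print ip }
    '
}

# Synthetic sample input: 192.0.2.7 fails three times, 198.51.100.9 once.
flooders 3 <<'EOF'
Nov 19 15:04:22 rosalita sshd[40232]: error: PAM: authentication error for illegal user alias from 192.0.2.7
Nov 19 15:07:32 rosalita sshd[40239]: error: PAM: authentication error for illegal user alias from 192.0.2.7
Nov 19 15:10:20 rosalita sshd[40247]: error: PAM: authentication error for illegal user alias from 192.0.2.7
Nov 19 15:13:46 rosalita sshd[40268]: error: PAM: authentication error for illegal user alias from 198.51.100.9
EOF
# Prints: 192.0.2.7
```

On the firewall, the output would go to something like `... | xargs pfctl -t popflooders -T add`.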

FreeBSD users will be pleased to know that something similar is possible on their systems too, only substituting the legacy ALTQ traffic shaping with its somewhat arcane syntax for the modern queues rules in this article.

Will you be attending our PF and networking session in Ottawa, or will you want to attend one elsewhere later? Please let us know at the email address in the tutorial description.

Update 2017-04-23: A truly unexpiring table, and downloadable datasets made available

Soon after publishing this article I realized that what I had written could easily be taken as a promise to keep a collection of POP3 gropers' IP addresses around indefinitely, in a table where the entries never expire.

Table entries do not expire unless you use a pfctl(8) command like the ones mentioned in the book and other resources I referenced earlier in the article, but on the other hand table entries will not survive a reboot either unless you arrange to have table contents stored to somewhere more permanent and restored from there. Fortunately our favorite toolset has a feature that implements at least the restoring part.

Changing the table definition quoted earlier to read

 table <popflooders> persist counters file "/var/tmp/popflooders"

takes care of the restoring part, and the backing up is a matter of setting up a cron(8) job to dump the current contents of the table to the file that will be loaded into the table at ruleset load.
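A minimal sketch of such a cron(8) job (the interval is my choice; the path matches the table definition above):

```shell
# Root crontab sketch: once an hour, write the current table contents to
# the file the ruleset loads at startup, so entries survive a reboot.
17 * * * * pfctl -t popflooders -T show > /var/tmp/popflooders
```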

Then today I made another tiny change and made the data available for download. The popflooders table is dumped at five past every full hour to pop3gropers.txt, a file designed to be read by anything that takes a list of IP addresses and ignores lines starting with the # comment character. I am sure you can think of suitable applications.

In addition, the same script does a verbose dump, including table statistics for each entry, to pop3gropers_full.txt for readers who are interested in such things as when an entry was created and how much traffic those hosts produced, keeping in mind that those hosts are not actually blocked here, only subjected to a tiny bandwidth.

As it says in the comment at the top of both files, you may use the data as you please for your own purposes; for any re-publishing or integration into other data sets, please contact me via the means listed in the bsdly.net whois record.

As usual I will answer any reasonable requests for further data such as log files, but do not expect prompt service and keep in mind that I am usually in the Central European time zone (CEST at the moment).

I suppose we should see this as a tiny, incremental evolution of the "Cybercrime Robot Torture As A Service" (CRTAAS) concept.

Update 2017-04-29: While the world was not looking, I supplemented the IP address dumps with two more versions: one with geoiplocation data added, and a per country summary based on the geoiplocation data.

Spending a few minutes with an IP address dump like the one described here and whois data is a useful exercise for anyone investigating incidents of this type. This .csv file is based on the 2017-04-29T1105 dump (preserved for reference), and reveals that not only is the majority of attempts from one country, but also that a very limited number of organizations within that country are responsible for the most active networks.

The spammer blacklist (see this post for background) was of course ripe for the same treatment, so now in addition to the familiar blacklist, that too comes with a geoiplocation annotated version and a per country summary.

Note that all of those files except the .csv file with whois data are products of automatic processes. Please contact me (the email address in the files works) if you have any questions or concerns.

Update 2017-05-17: After running with the autofilling tables for approximately a month (and, I must confess, extracting bad login attempts that didn't actually trigger the overload at semi-random but roughly daily intervals), I thought I'd check a few things about the catch. I already knew roughly how many hosts in total, but how many were contacting us via IPv6? Let's see:

[Wed May 17 19:38:02] peter@skapet:~$ doas pfctl -t popflooders -T show | wc -l
[Wed May 17 19:38:42] peter@skapet:~$ doas pfctl -t popflooders -T show | grep -c \:

Meaning that of a total of 5239 miscreants trapped, only 77, or roughly 1.5 per cent, tried contacting us via IPv6. The cybercriminals, or at least the literal bottom feeders like the pop3 password gropers, are still behind the times in a number of ways.

Update 2017-06-13: BSDCan 2017 is past, and the PF and networking tutorial with OpenBSD session had 19 people signed up for it. We made the slides available on the net here during the presentation and announced them on Twitter and elsewhere just after the session concluded. The revised tutorial was fairly well received, and it is likely that we will be offering roughly equivalent but not identical sessions at future BSD events or other occasions as demand dictates.

by Peter N. M. Hansteen (noreply@blogger.com) at June 13, 2017 07:30 AM

June 12, 2017

Petter Reinholdtsen

Updated sales number for my Free Culture paper editions

It is pleasing to see that the work we put down in publishing new editions of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig, is still being appreciated. I had a look at the latest sales numbers for the paper edition today. Not too impressive, but happy to see some buyers still exist. All the revenue from the books is sent to the Creative Commons Corporation, and they receive the largest cut if you buy directly from Lulu. Most books are sold via Amazon, with Ingram second and only a small fraction directly from Lulu. The ebook edition is available for free from Github.

Title / language          2016 jan-jun   2016 jul-dec   2017 jan-may
Culture Libre / French               3              6             15
Fri kultur / Norwegian               7              1              0
Free Culture / English              14             27             16
Total                               24             34             31

A bit sad to see the low sales number on the Norwegian edition, and a bit surprising the English edition still selling so well.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

June 12, 2017 09:40 AM

June 09, 2017

Petter Reinholdtsen

Release 0.1.1 of free software archive system Nikita announced

I am very happy to report that the Nikita Noark 5 core project tagged its second release today. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.1.1 since version 0.1.0 (from NEWS.md):

If this sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (nikita-noark mailing list).

June 09, 2017 10:40 PM

May 16, 2017

Peter Hansteen (That Grumpy BSD Guy)

The Hail Mary Cloud And The Lessons Learned

Against ridiculous odds and even after gaining some media focus, the botnet dubbed The Hail Mary Cloud apparently succeeded in staying under the radar and kept compromising Linux machines for several years. This article, based on my BSDCan 2013 talk, sums up known facts about the botnet and suggests some common-sense measures to be taken going forward.

The Hail Mary Cloud was a widely distributed, low intensity password guessing botnet that targeted Secure Shell (ssh) servers on the public Internet.

The first activity may have been as early as 2007, but our first recorded data start in late 2008. Links to full data and extracts are included in this article.

We present the basic behavior and algorithms, and point to possible policies for staying safe(r) from similar present or future attacks.

But first, a few words about the devil we knew before the incidents that form the core of the narrative.

The Traditional SSH Bruteforce Attack

If you run an Internet-facing SSH service, you have seen something like this in your logs:

Sep 26 03:12:34 skapet sshd[25771]: Failed password for root from port 40992 ssh2
Sep 26 03:12:34 skapet sshd[5279]: Failed password for root from port 40992 ssh2
Sep 26 03:12:35 skapet sshd[5279]: Received disconnect from 11: Bye Bye
Sep 26 03:12:44 skapet sshd[29635]: Invalid user admin from
Sep 26 03:12:44 skapet sshd[24703]: input_userauth_request: invalid user admin
Sep 26 03:12:44 skapet sshd[24703]: Failed password for invalid user admin from port 41484 ssh2
Sep 26 03:12:44 skapet sshd[29635]: Failed password for invalid user admin from port 41484 ssh2
Sep 26 03:12:45 skapet sshd[24703]: Connection closed by
Sep 26 03:13:10 skapet sshd[11459]: Failed password for root from port 43344 ssh2

This is the classic, rapid-fire type of bruteforce attack, with rapid-fire login attempts from one source. (And yes, skapet is the Internet-facing host on my home network.)

The Likely Business Plan

These attempts are often preceded by a port scan, but in other cases it appears that the miscreants are just blasting away at random. In my experience, with the gateway usually at the lowest-numbered address, the activity usually turns up there first, before moving on to higher-numbered hosts. I'm not really of a mind to offer help or advice to the people running those scripts, but it might be possible to scan the internet from downwards next time. Anyway, looking at the log excerpts, the miscreants' likely plan is
  1. Try for likely user names, hope for guessable password, keep guessing until successful.
  2. PROFIT!
But then the attempts usually come in faster than most of us can type, so with a little help from toolmakers, we came up with an inexpensive first line of defense, easily implemented in perimeter packet filters (aka firewalls).

Traditional Anti-Bruteforce Rules

Rapid-fire bruteforce attacks are easy to head off. I tend to use OpenBSD on internet facing hosts, so first we present the technique as it has been available in OpenBSD since version 3.7 (released in 2005), where state tracking options are used to set limits we later act on:

In your /etc/pf.conf, you add a table to store addresses, block access for all traffic coming from members of that table, and finally amend your typical pass rule with some state tracking options. The result looks something like this:

table <bruteforce> persist
block quick from <bruteforce>
pass inet proto tcp to $int_if:network port $tcp_services \
keep state (max-src-conn 100, max-src-conn-rate 15/5, \
overload <bruteforce> flush global)

Here, max-src-conn is the maximum number of concurrent connections allowed from one host

max-src-conn-rate is the maximum allowed rate of new connections, here 15 connections per 5 seconds.

overload <bruteforce> means that any hosts that exceed either of these limits have their address added to this table

and, just for good measure, flush global means that for any host that is added to our overload table, we kill all existing connections too.

Basically, problem solved - the noise from rapid-fire bruteforcers generally disappears instantly or after a very few attempts. If you are about to implement something like this (and many do -- the bruteforcer section in my PF tutorial appears to be among the more popular ones), you probably need to watch your logs to find useful numbers for your site, and tweak rules accordingly. I have yet to meet an admin who plausibly claims to never have been tripped up by their overload rules at some point. That's when you learn to appreciate having an alternative way in to your systems, such as a separate admin network.
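When that happens to you, the remedy is a quick pfctl(8) session from the console or that separate admin network; a sketch, with a placeholder address:

```shell
# Remove your own address from the overload table (192.0.2.1 is a
# placeholder), then list the remaining members to verify:
pfctl -t bruteforce -T delete 192.0.2.1
pfctl -t bruteforce -T show
```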

Traditional Anti-Bruteforce Rules, Linux Style

For those not yet converted to the fine OpenBSD toolset (available in FreeBSD and other BSDs too, with only minor if any variations in details for this particular context), the Linux equivalent would be something like

sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 5 \
--hitcount 15 --rttl --name SSH -j DROP

But be warned: this will still be minus the maximum number of connections limit, plus the usual iptables warts. And you'd need a separate set of commands for ip6tables.

It's likely something similar is doable with other tools and products too, including possibly some proprietary ones. I've made something of an effort to limit my exposure to the non-free tools, so I can't offer you any more detail. To find out what your present product can do, please dive into the documentation for whichever product you are using. Or come back for some further OpenBSD goodness.

But as you can see, for all practical purposes the rapid-fire bruteforce or floods problem has been solved with trivial configuration tweaks.

But then something happened.

What's That? Something New!

On November 19th, 2008 (or shortly thereafter), I noticed this in my authentication logs:

Nov 19 15:04:22 rosalita sshd[40232]: error: PAM: authentication error for illegal user alias from s514.nxs.nl
Nov 19 15:07:32 rosalita sshd[40239]: error: PAM: authentication error for illegal user alias from c90678d3.static.spo.virtua.com.br
Nov 19 15:10:20 rosalita sshd[40247]: error: PAM: authentication error for illegal user alias from 207-47-162-126.prna.static.sasknet.sk.ca
Nov 19 15:13:46 rosalita sshd[40268]: error: PAM: authentication error for illegal user alias from 125-236-218-109.adsl.xtra.co.nz
Nov 19 15:16:29 rosalita sshd[40275]: error: PAM: authentication error for illegal user alias from
Nov 19 15:19:12 rosalita sshd[40279]: error: PAM: authentication error for illegal user alias from
Nov 19 15:22:29 rosalita sshd[40298]: error: PAM: authentication error for illegal user alias from
Nov 19 15:25:14 rosalita sshd[40305]: error: PAM: authentication error for illegal user alias from 130.red-80-37-213.staticip.rima-tde.net
Nov 19 15:28:23 rosalita sshd[40309]: error: PAM: authentication error for illegal user alias from 70-46-140-187.orl.fdn.com
Nov 19 15:31:17 rosalita sshd[40316]: error: PAM: authentication error for illegal user alias from gate-dialog-simet.jgora.dialog.net.pl
Nov 19 15:34:18 rosalita sshd[40334]: error: PAM: authentication error for illegal user alias from
Nov 19 15:37:23 rosalita sshd[40342]: error: PAM: authentication error for illegal user alias from
Nov 19 15:40:20 rosalita sshd[40350]: error: PAM: authentication error for illegal user alias from 70-46-140-187.orl.fdn.com
Nov 19 15:43:39 rosalita sshd[40354]: error: PAM: authentication error for illegal user alias from
Nov 19 15:46:41 rosalita sshd[40374]: error: PAM: authentication error for illegal user amanda from
Nov 19 15:49:31 rosalita sshd[40378]: error: PAM: authentication error for illegal user amanda from host116-164.dissent.birch.net
Nov 19 15:55:47 rosalita sshd[40408]: error: PAM: authentication error for illegal user amanda from robert71.lnk.telstra.net
Nov 19 15:59:08 rosalita sshd[40412]: error: PAM: authentication error for illegal user amanda from static-71-166-159-177.washdc.east.verizon.net

... and so on. The alphabetic progression of user names went on and on.

The pattern seemed to be that several hosts, in widely different networks, try to access our system as the same user, up to minutes apart. When any one host comes back, it's more likely than not several user names later. The full sequence (it stopped December 30th) is available here.

Take a few minutes to browse the log data if you like. It's worth noting that rosalita was a server that had a limited set of functions for a limited set of users, and basically no other users than myself ever logged in there via SSH, even if they for various reasons had the option open to them. So in contrast to busier sites where sequences like this might have drowned in the noise, here it really stood out. And I suppose after looking at the data, you can understand my initial reaction.

The Initial Reaction

My initial reaction was pure disbelief.

For the first few days I tried tweaking PF rules, playing with the attempts/second values and scratching my head, going, "How do I make this match?"

I spent way too much time on that, and the short version of the answer to that question is, you can't. With the simple and in fact quite elegant state tracking options, you will soon hit limits (especially time limits) that interfere with normal use, and you end up blocking legitimate traffic.

So I gave up on prevention (which really only would have rid me of a bit of noise in my authentication logs), and I started analyzing the data instead, trying to eyeball patterns that would explain what I was seeing. After a while it dawned on me that this could very well be a coordinated effort, using a widely distributed set of compromised hosts.

So there was a bit of reason in there after all. Maybe even a business plan or model. Next, I started analyzing my data, and came up with -

Bruteforcer Business Plan, Distributed Version

The Executive Summary would run something like this: Have more hosts take turns, round robin-ish, at long enough intervals to stay under the radar, guessing for weak passwords.

The plan is much like before, but now we have more hosts on the attacking side, so
  1. Pick a host from our pool, assign a user name and password (picked from a list, dictionary or pool)
  2. For each host,
    1. Try logging in to the chosen target with the assigned user name and password
    2. If successful, report back to base (we theorize); else wait for instructions (again we speculate)
  3. Go to 1).
  4. For each success at 2.2), PROFIT!

You're The Target

Let's recap, and take a step back. What have we learned?

To my mind at least, it all boils down to the basics.

At this point I thought I had something useful, so I started my first writeup for publication. I had just started a new job at the time, and I think I mentioned the oddities to some of my new colleagues (that company is unfortunately defunct, but the original linked articles give some information). Anyway, I wrote and published, hoping to generate a little public attention for myself and my employer. And who knows, maybe even move a few more copies of that book I'd written the year before.

Initial Public Reaction

On December 2, 2008, I published the first blog post in what would become a longish sequence, A low intensity, distributed bruteforce attempt, where I summarized my findings. It's slightly more wordy than this piece, but if I've piqued your interest so far, please go ahead and read. And as to a little public attention, I got my wish. The post ended up slashdotted, the first among my colleagues to end up with their name on the front page of Slashdot.

That brought

The slow bruteforcers were not getting in, so I just went on collecting data. I estimated they'd be going on well past new year's if they were going to reach the end of the alphabet.

On December 30th, 2008, The Attempts Stopped

The attempts came to an end, conveniently while I was away on vacation. The last entries were:

Dec 30 11:03:08 rosalita sshd[51108]: error: PAM: authentication error for illegal user sophia from
Dec 30 11:05:08 filehut sshd[54932]: error: PAM: authentication error for illegal user sophia from
Dec 30 11:06:35 rosalita sshd[51116]: error: PAM: authentication error for illegal user sophia from static-98-119-110-139.lsanca.dsl-w.verizon.net
Dec 30 11:09:03 filehut sshd[54981]: error: PAM: authentication error for illegal user sophia from static-98-119-110-139.lsanca.dsl-w.verizon.net

That is, not even completing a full alphabetic cycle.

By then they had made 29916 attempts, all failed. You can find the full listing here.

They tried 6100 user IDs (list by frequency here). More than likely you can guess the top one without even looking.

The attempts came from a total of 1193 different hosts (list by frequency here).

As I said earlier, there were no successful penetrations. Zero.

Common characteristics

The slashdot story brought comments and feedback, with some observations from other sites. Not a lot of data, but enough that the patterns we had observed were confirmed. The attempts were all password authentication attempts; no other authentication methods were attempted.

For the most part the extended incident consisted of attempts on an alphabetic sequence of 'likely' user names, but all sites also saw at least one long run of root only attempts. This pattern was to repeat itself, and also show up in data from other sources.

There would be anything from seconds to minutes between attempts, but attempts from any single host would come at much longer intervals.

First Round Observations, Early Conclusions

Summing up what we had so far, here are a few observations and attempts at early conclusions.

At the site where I had registered the distributed attempts, the Internet-reachable machines all ran either OpenBSD or FreeBSD. Only two FreeBSD boxes were contacted.

The attackers were hungry for root, so having PermitRootLogin no in our sshd config anywhere Internet facing proved to be a good idea.

We hadn't forced our users to keys only, but a bit of luck and John the Ripper (/usr/ports/security/john) saved our behinds.

The number of attempts per user name had decreased over time (as illustrated by this graph), so we speculated in the second article Into a new year, slowly pounding the gates (on slashdot as The Slow Bruteforce Botnet(s) May Be Learning) that success or not was measured at a command and control site, with resources allocated accordingly.

With the sequence not completed, we thought they'd given up. After all, the odds against succeeding seemed monumental.

After all, a couple of slashdotted blog posts couldn't have hurt, could they?

But Of Course They Came Back

As luck would have it, whoever was out there had not totally admitted defeat just yet. In the early hours CET, April 7th, 2009, the slow brutes showed up again:

Apr  7 05:02:07 rosalita sshd[4739]: error: PAM: authentication error for root from ruth.globalcon.net
Apr 7 05:02:15 rosalita sshd[4742]: error: PAM: authentication error for root from ip-206-83-192-201.sterlingnetwork.net
Apr 7 05:02:54 rosalita sshd[4746]: error: PAM: authentication error for root from cyscorpions.com
Apr 7 05:02:59 rosalita sshd[4745]: error: PAM: authentication error for root from smtp.bancomorada.com.br
Apr 7 05:03:10 rosalita sshd[4751]: error: PAM: authentication error for root from
Apr 7 05:03:25 rosalita sshd[4754]: error: PAM: authentication error for root from
Apr 7 05:03:52 rosalita sshd[4757]: error: PAM: authentication error for root from rainha.florianonet.com.br
Apr 7 05:04:00 rosalita sshd[4760]: error: PAM: authentication error for root from
Apr 7 05:04:34 rosalita sshd[4763]: error: PAM: authentication error for root from s1.serverhex.com
Apr 7 05:04:38 rosalita sshd[4765]: error: PAM: authentication error for root from mail.pitnet.com.br

The sequence started with 2318 attempts at root before moving on to admin and proceeding alphabetically. The incident played out pretty much like the previous one, only this time I was sure I had managed to capture all relevant data before my logs were rotated out of existence.

The data is available in the following forms: Full log here, one line per attempt here, users by frequency here, hosts by frequency here.

I couldn't resist kicking up some more publicity, and indeed we got another slashdot storm out of the article The slow brute zombies are back, on slashdot as The Low Intensity Brute-Force Zombies Are Back.

And shortly afterwards, we learned something new -

Introducing dt_ssh5, Linux /tmp Resident

Of course there was a piece of malware involved.

A Linux binary called dt_ssh5 did the grunt work.

The dt_ssh5 file was found installed in /tmp on affected systems. The perpetrators likely targeted that directory because /tmp tends to be world-readable and world-writable.

Again, this points us to the three basic lessons:
  1. Stay away from guessable passwords
  2. Watch for weird files (stuff you didn't put there yourself) anywhere in your file system, even in /tmp.
  3. Internalize the fact that PermitRootLogin yes is a bad idea.

dt_ssh5: Basic Algorithm

The discovery of dt_ssh5 made for a more complete picture. A rough algorithm suggested itself:

  1. Pick a new host from our pool; assign a user name and password
  2. For each host:
    1. Try the user name and password
    2. If successful:
      1. Drop the dt_ssh5 binary in /tmp; start it
      2. Report back to base
    3. Otherwise, wait for instructions
  3. Go to 1.
  4. For each success at 2.2, PROFIT!
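Since I never obtained the controller code, anything beyond the outline is guesswork; still, the loop above can be sketched roughly as follows, with try_login() as a stand-in for a single slow SSH guess (all names and the stub credentials are hypothetical):

```python
import random

# Hypothetical sketch of the coordinator loop outlined above.
# try_login() stands in for one slow SSH authentication attempt by a bot;
# in the real swarm these were spaced seconds or minutes apart.
def try_login(host, user, password):
    return (user, password) == ("admin", "admin123")  # stub: one weak account

def run_wave(bots, targets, credentials):
    compromised = []
    for host in targets:                             # 2. for each host
        bot = random.choice(bots)                    # 1. pick a bot from the pool
        user, password = random.choice(credentials)  # ...assign user name and password
        if try_login(host, user, password):          # 2.1 try the guess
            # 2.2 drop dt_ssh5 in /tmp, start it, report back to base
            compromised.append((host, user, password))
        # else: the bot idles, waiting for instructions
    return compromised

hits = run_wave(["bot1", "bot2"], ["h1", "h2", "h3"], [("admin", "admin123")])
print(len(hits))  # 3: the stub account is weak on every target
```

The essential property this models is that each guess comes from a different bot, so no single source ever trips a rate limiter.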

I never got myself a copy, so the actual mechanism for communicating back to base remains unclear.

The Waves We Saw, 2008 - 2012

We saw eight sequences (complete list of articles in the References section at the end):

From - To                                 | Attempts | User IDs | Hosts | Successful Logins
2008-11-19 15:04:22 - 2008-12-30 11:09:03 |    29916 |     6100 |  1193 |                 0
2009-04-07 03:56:25 - 2009-04-12 21:01:37 |    12641 |      249 |  1104 |                 0
2009-09-30 21:15:36 - 2009-10-15 13:42:07 |     9998 |        1 |  1071 |                 0
2009-10-28 23:58:35 - 2010-01-22 09:56:24 |    44513 |     8110 |  4158 |                 0
2010-06-17 01:55:34 - 2010-08-11 13:23:01 |    23014 |     3887 |  5568 |                 0
2011-10-23 04:13:00 - 2011-10-29 05:40:04 |     4773 |      944 |   338 |                 0
2011-11-03 20:56:18 - 2011-11-26 17:42:19 |     4907 |     2474 |   252 |                 0
2012-04-01 12:33:04 - 2012-04-06 14:52:11 |     4757 |     1081 |    23 |                 0

The 2009-09-30 sequence was notable for trying only root, the 2012-04-01 sequence for being the first to attempt access to OpenBSD hosts.

We may have missed earlier sequences; early reports place the first similar attempts as far back as 2007.

For A While, The Botnet Grew

From our point of view, the swarm stayed away for a while and came back stronger, for a couple of iterations, possibly after tweaking their code in the meantime. Or rather, the gaps in our data represent times when it focused elsewhere.

Clearly, not everybody was listening to online rants about guessable passwords.

For a while, the distributed approach appeared to be working.

It was (of course) during a growth period that I coined the phrase "The Hail Mary Cloud".

Instantly, a myriad of "Hail Mary" experts joined the insta-punditry on slashdot and elsewhere.

It Went Away Or Dwindled

Between August 2010 and October 2010, things either started going badly for The Hail Mary Cloud, or possibly they focused elsewhere.

I went on collecting data.

There wasn't much to write about, except possibly that the botnet's command and control was redistributing effort based on past success, aiming at crackable hosts elsewhere.

And Resurfaced In China?

Our last sighting so far was in April 2012. The data is preserved here.

This was the first time we saw Hail Mary Cloud style attempts at accessing OpenBSD systems.

The majority of attempts were spaced at least 10 seconds apart, and until I revisited the data recently, I thought only two hosts in China were involved.

In fact, 23 hosts made a total of 4757 attempts at 1081 user IDs, netting 0 successful logins.

I thought the new frequency data interesting enough to write about, so I wrote up If We Go One Attempt Every Ten Seconds, We're Under The Radar, which netted another slashdotting. I took another look at the data later and slightly amended the conclusions; the article has since been corrected with properly extracted data.

Then What To Do?

The question anybody reading this far will be asking is, what should we do in order to avoid compromise by the password guessing swarms? To my mind, it all boils down to common sense systems administration:

Mind your logs. You can read them yourself, or train a robot to. I use logsentry; other monitoring tools can be taught to look for anomalies (failed logins and so on)
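As a taste of training your own robot to read logs, here is a minimal sketch (not logsentry itself; the regex and function names are mine) that counts failed logins per source host in lines like the ones quoted earlier:

```python
import re
from collections import Counter

# Match the sshd/PAM failure lines quoted earlier in the article and
# capture the user name and the source host.
FAIL_RE = re.compile(r"error: PAM: authentication error for (\S+) from (\S+)")

def failed_logins(lines):
    per_host = Counter()
    for line in lines:
        m = FAIL_RE.search(line)
        if m:
            per_host[m.group(2)] += 1  # count failures by source host
    return per_host

log = [
    "Apr  7 05:02:07 rosalita sshd[4739]: error: PAM: authentication error for root from ruth.globalcon.net",
    "Apr  7 05:02:54 rosalita sshd[4746]: error: PAM: authentication error for root from cyscorpions.com",
    "Apr  7 05:03:10 rosalita sshd[4751]: error: PAM: authentication error for root from",
]
print(failed_logins(log).most_common())
# [('ruth.globalcon.net', 1), ('cyscorpions.com', 1)]
```

Lines where the resolver lost the hostname (as in some of the log excerpts above) simply fail the match and are skipped.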

Keep your system up to date. If not OpenBSD, check openssh.com for the latest version, check what your system has and badger the maintainer if it's outdated.

And of course, configure your applications such as sshd properly -

sshd_config: 'PermitRootLogin no' and a few other items

These two settings in your sshd_config will give you the most bang for the buck:

PermitRootLogin no
PasswordAuthentication no

Make your users generate keys, add the *.pub to their ~/.ssh/authorized_keys files.

For a bit of background, Michael W. Lucas: SSH Mastery (Tilted Windmill Press 2013) is a recent and very readable guide to configuring your SSH (server and clients) sensibly. It's compact and affordable too.

Keep Them Out, Keep Them Guessing

At this point, most geeks would wax lyrical about the relative strengths of different encryption schemes and algorithms.

Being a simpler mind, I prefer a different metric for how good your scheme is, or the effectiveness of its obfuscation (also see entropy):

How many bytes does a would-be intruder have to get exactly right?
I've summed up the answer to that question in this table:

Authentication method        | Number of bytes
Password                     | Password length (varies; how long is yours?)
Alternate port               | Port number (2 bytes; it's a 16-bit value, remember)
Port knocking                | Number of ports in sequence * 2 (each port still a 16-bit value)
Single packet authentication | 2 bytes (the port) plus max 1440 (IPv4/Ethernet) or 1220 (IPv6/Ethernet)
Key only                     | Number of bytes in the key (depending on key strength, up to several kB)
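For some perspective on that table, here is my own back-of-the-envelope arithmetic (not from the original article) comparing a few of the search spaces:

```python
# Translate "bytes to get exactly right" into the number of possible
# values an attacker must guess among. My own arithmetic, for illustration.
def search_space_bits(num_bytes):
    return num_bytes * 8

alternate_port = 2 ** 16          # one 16-bit port number
knock_3_ports  = (2 ** 16) ** 3   # a knock sequence of three ports
password_8     = 95 ** 8          # 8 characters of printable ASCII

print(alternate_port)              # 65536
print(knock_3_ports < password_8)  # True
```

Perhaps surprisingly, a three-port knock sequence (2^48 possibilities) spans a smaller search space than an eight-character printable-ASCII password.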

You can of course combine several methods (with endless potential for annoying your users), or use two factor authentication (OpenSSH supports several schemes).

Keys. You've Got To Have Keys!

By far the most effective measure is to go keys only for your ssh logins. In your sshd_config, add or uncomment

PasswordAuthentication no

Restart your sshd, and have all users generate keys, like this:

$ ssh-keygen -C "userid@domain.tld"

There are other options to play with, see ssh-keygen(1) for inspiration.

Then add the *.pub to their ~/.ssh/authorized_keys files.

And I'll let you in on a dirty little secret: you can even match on a specific interface's address in your sshd config to apply settings like these selectively.
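A hedged sketch of what that selective matching can look like in sshd_config; the address below is a documentation placeholder, and the exact Match criteria available depend on your OpenSSH version (see sshd_config(5)):

```
# Require keys and refuse root everywhere by default
PermitRootLogin no
PasswordAuthentication no

# Relax password authentication only for connections arriving
# on the address bound to the internal interface (placeholder address)
Match LocalAddress 192.0.2.1
    PasswordAuthentication yes
```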

Why Not Use Port Knocking?

Whenever I mention the Hail Mary Cloud online, two suggestions always turn up: The iptables example I mentioned earlier (or link to the relevant slide), and "Why not use port knocking?". Well, consider this:

Port knocking usually means having all ports closed, but with a daemon reading your firewall's logs for a predetermined sequence of ports. Knock on the correct ports in sequence, and you're in.

Another dirty little secret: It's possible to implement port knocking with only the tools in an OpenBSD base system. No, I won't tell you how.

Executive Summary: Don't let this keep you from keeping your system up to date.

To my mind port knocking gives you:
  1. Added complexity, or one more thing that can go wrong. If the daemon dies, you've bricked your system.
  2. An additional password that's hard to change. There's nothing magical about TCP/UDP ports. It's a 16 bit number, and in our context, it's just another alphabet. The swarm will keep guessing. And it's likely the knock sequence (aka password) is the same for all users.
  3. You won't recognize an attack until it succeeds, if even then. Guessing attempts will be indistinguishable from random noise (try a raw tcpdump of any internet-facing interface to see the white noise you mostly block drop anyway), so you will have no early warning.
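To make point 2 concrete: a knock sequence really is just a short password over the 16-bit alphabet of port numbers. A toy matcher (hypothetical; no real knock daemon is implied to work this way) might look like:

```python
# The "password": three symbols of 2 bytes each, drawn from the
# 16-bit alphabet of port numbers. Likely shared by all users.
SECRET = (7000, 8000, 9000)

def knock_listener(observed_ports, secret=SECRET):
    progress = 0
    for port in observed_ports:
        if port == secret[progress]:
            progress += 1
            if progress == len(secret):
                return True  # correct sequence seen: open the door
        else:
            # wrong knock: restart, allowing this port to begin a new attempt
            progress = 1 if port == secret[0] else 0
    return False

print(knock_listener([22, 7000, 8000, 9000]))  # True
print(knock_listener([7000, 8000, 8001]))      # False
```

A swarm guessing port sequences against this looks exactly like the random scan noise described in point 3, which is the problem.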
Port knocking proponents seem to have sort of moved on to single packet authentication instead, but even those implementations still contain the old port knocking code intact.

If you want a longer form of those arguments, my November 4, 2012 rant Why Not Use Port Knocking? was my take (with some inaccuracies, but you'll live).

There's No Safety In High Ports Anymore

Another favorite suggestion is to set your sshd to listen on some alternate port instead of the default port 22/TCP.

People who did so have had a few years of quiet logs, but recent reports show that whoever is out there has the resources to scan alternate ports too.

Once again, don't let running your sshd on an alternate port keep you from keeping your system up to date.

Of course I've ranted about this too, in February 2013, There's No Protection In High Ports Anymore. If Indeed There Ever Was. (which earned me another slashdotting).

Reports with logs of such activity at alternate ports trickle in from time to time, but of course any site with a default deny packet filtering policy will not see any traces of such scans unless you go looking specifically at the mass of traffic that gets dropped at the perimeter.

Final thoughts, for now

Microsoftish instapundits were quick to assert that ssh is insecure.

They're wrong. OpenSSH (which is what essentially everyone uses) is maintained as an integral part of the OpenBSD project, and as such is a very thoroughly audited mass of code. And it keeps improving with every release.

I consider the Hail Mary Cloud an example of distributed, parallel problem solving, conceptually much like SETI@Home but with different logic and of course a more sinister intent.

Computing power is cheap now, getting cheaper, and even more so when you can leverage other people's spare cycles.

The huge swarm of attackers concept is, as I understand it, being re-used in the recent WordPress attacks. We should be prepared for swarm attacks on other applications as soon as they reach a critical mass of users.

There may not be a bullseye on your back yet (have you looked lately?), but you are an attractive target.

Fortunately, sane system administration practices will go a long way towards thwarting intrusion attempts.
Keep it simple, stay safe.

UPDATE 2013-11-21: A recent ACM Conference on Computer and Communication Security paper, "Detecting stealthy, distributed SSH brute-forcing," penned by Mobin Javed and Vern Paxson, references a large subset of the data and offers some real analysis, including correlation with data from other sites (spoiler alert: in some waves, almost total overlap of participating machines). One interesting point from the paper is that attacks matching our profile were apparently seen at the Lawrence Berkeley National Laboratory as early as 2005.

And in other news, it appears that GitHub has been subject to an attack that matches the characteristics we have described. A number of accounts with weak passwords were cracked. Investigations appear to still be ongoing. Fortunately, GitHub has started offering other authentication methods.

UPDATE 2014-09-28: Since early July 2014, we have been seeing similar activity aimed at our POP3 service, with usernames taken almost exclusively from our spamtrap list. The article Password Gropers Take the Spamtrap Bait has all the details and log data as well as references to the spamtrap list.

UPDATE 2014-12-10: My Passwords14 presentation, Distributed, Stealthy Brute Force Password Guessing Attempts - Slicing and Dicing Data from Recent Incidents has some further data as well as further slicing and dicing of the earlier data (with slightly different results). 

UPDATE 2016-08-10: The POP3 gropers never went away entirely and soon faded into a kind of background noise. In June of 2016, however, they appeared to have hired themselves out to a systematic hunt for Chinese user names. The article Chinese Hunting Chinese over POP3 in Fjord Country has further details, and as always, links to log data and related files.


The slides for the talk this article is based on live at http://home.nuug.no/~peter/hailmary2013/, with a zipped version including all data at http://home.nuug.no/~peter/hailmary2013.zip (approx. 26MB) for your convenience.

Mobin Javed and Vern Paxson, "Detecting stealthy, distributed SSH brute-forcing," ACM International Conference on Computer and Communication Security (CCS), November 2013.

The blog posts (field notes) of the various incidents, data links within:

Peter N. M. Hansteen, (2008-12-02) A low intensity, distributed bruteforce attempt (slashdotted)

Peter N. M. Hansteen, (2008-12-06) A Small Update About The Slow Brutes

Peter N. M. Hansteen, (2008-12-21) Into a new year, slowly pounding the gates (slashdotted)

Peter N. M. Hansteen, (2009-01-22) The slow brutes, a final roundup

Peter N. M. Hansteen, (2009-04-12) The slow brute zombies are back (slashdotted)

Peter N. M. Hansteen, (2009-10-04) A Third Time, Uncharmed (slashdotted)

Peter N. M. Hansteen, (2009-11-15) Rickrolled? Get Ready for the Hail Mary Cloud! (slashdotted)

Peter N. M. Hansteen, (2011-10-23) You're Doing It Wrong, Or, The Return Of The Son Of The Hail Mary Cloud

Peter N. M. Hansteen, (2011-10-29) You're Doing It Wrong, Returning Scoundrels

Peter N. M. Hansteen, (2012-04-06) If We Go One Attempt Every Ten Seconds, We're Under The Radar (slashdotted)

Peter N. M. Hansteen, (2012-04-11) Why Not Use Port Knocking?

Peter N. M. Hansteen, (2013-02-16) There's No Protection In High Ports Anymore. If Indeed There Ever Was. (slashdotted)

Other Useful Texts

Marcus Ranum: The Six Dumbest Ideas in Computer Security, September 1, 2005

Michael W. Lucas: SSH Mastery, Tilted Windmill Press 2013 (order direct from the OpenBSD bookstore here)

Michael W. Lucas: Absolute OpenBSD, 2nd edition No Starch Press 2013 (order direct from the OpenBSD bookstore here)

Peter N. M. Hansteen, The Book of PF, 3rd edition, No Starch Press 2014, also the online PF tutorial it grew out of, several formats http://home.nuug.no/~peter/pf/, more extensive slides matching the most recent session at http://home.nuug.no/~peter/pf/newest/

The OpenBSD web site, http://www.openbsd.org/ -- lots of useful information.

If you enjoyed this: Support OpenBSD!

If you have enjoyed reading this, please buy OpenBSD CDs and other items, and/or donate!

Useful links for this are:

OpenBSD.org Orders Page: http://www.openbsd.org/orders.html

OpenBSD Donations Page: http://www.openbsd.org/donations.html.

OpenBSD Hardware Wanted Page: http://www.openbsd.org/want.html.

Remember: Free software takes real work and real money to develop and maintain.

If you want to support me, buy the book! (if you want to give the OpenBSD project a cut of that, this is the link you want).

by Peter N. M. Hansteen (noreply@blogger.com) at May 16, 2017 11:32 AM

May 10, 2017

NUUG news

Another victory for NUUG, EFN and IMC AS in the DNS seizure case!

We have just learned that ØKOKRIM (the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime) has decided to lift its seizure of the accounting records of IMC AS, which acted as registrar for the domain name popcorn-time.no. We suspect the reversal was prompted by the following statement in the Court of Appeal's ruling:

«The seizure of Internet Marketing Consult AS' accounting records is authorized by section 203, first paragraph, first sentence of the Criminal Procedure Act, that is, justified by their significance as evidence. What has been seized are paper copies of the accounts and printouts of outgoing invoices from the accounting system.

On the justification for upholding the seizure, and on its proportionality, the District Court states:

According to ØKOKRIM, the documents contain information about who ordered the domain name. The court takes this as its basis, and it is assumed that the documents will have significance as evidence in the subsequent criminal case, cf. section 203, first paragraph, first sentence of the Criminal Procedure Act. The intervention is neither unfounded nor disproportionate, cf. section 170a of the Criminal Procedure Act. It is noted, among other things, that IMC has electronic access to its accounts and can request paper copies by contacting the police.

Here the District Court makes no independent assessment of whether the seized material is suited to serve as evidence as claimed. It seems obvious that outgoing invoices could serve as evidence of who is or has been a customer, and has thus made use of the domain name popcorn-time.no. But a sound assessment of the seizure requires considering its various parts, including whether the remaining accounting material serves this purpose. Given that the accounts were seized on March 11, 2016, while the District Court's ruling was handed down on February 3, 2017, it must also be explained why it is still necessary to uphold the seizure. Both the scope and the time elapsed are also of substantial importance for the proportionality assessment under section 170a of the Criminal Procedure Act.»

This was truly welcome news.

May 10, 2017 09:25 AM

May 06, 2017

NUUG news

Press release: Popcorn-time.no - NUUG and EFN prevail with their appeal in the popcorn-time.no case

It is hard to read the decision (17-041336SAK-BORG/04) as anything but a complete victory, even though this is only one step on the ladder. The Court of Appeal criticizes the District Court for not treating the issues adequately, and instructs the District Court to consider the following issues more thoroughly.

"The fight for freedom on the Internet continues with undiminished commitment!" says NUUG chair Hans-Petter Fjeld.

"We are deeply disappointed by the District Court's shoddy craftsmanship, and pleased that the Court of Appeal agrees with us," says NUUG chair Hans-Petter Fjeld.


In March 2016 Økokrim seized the domain popcorn-time.no. In the process, among other things, the chairman of the board of the ISP holding the domain was subjected to a search, and the company's accounting records were seized. In NUUG and EFN's view, domains can only be seized following a court judgment, given the Constitution's prohibition of prior censorship.


NUUG is a non-commercial association working for the adoption of UNIX-like systems, free software and open standards in Norway. The association was founded in 1984 with the aim of increasing interest in the use of UNIX and of stimulating the exchange of information and experience among users. NUUG is the Norwegian branch of the international organization USENIX («The Advanced Computing Systems Association»). NUUG currently has 267 individual members and 42 corporate members.


EFN is a digital human rights organization. Central to its work in recent years have been efforts against censorship, surveillance, and abusive enforcement of copyright legislation on the Internet. An important part of this work is freedom of expression in the information society, in particular freedom of expression on the Internet. EFN is part of the European rights organization EDRi. EFN currently has about 535 members, both individuals and organizations.


NUUG chair Hans-Petter Fjeld
email: sekretariat (at) nuug.no
phone: +47 95728209


May 06, 2017 04:30 PM

April 19, 2017

Holder de ord

PAID JOB: Categorizing election promises

Interested in politics? Are you a student who would like to earn a little extra money alongside your studies?

Holder de ord is a politically independent organization whose goal is to make it easier to follow Norwegian parliamentary politics. Among the services we offer is a complete promise database, which today contains all the promises from the eight parliamentary parties' platforms for the periods 2009-2013 and 2013-2017.

For the 2017 election year, the database will be updated with the promises of all the parties represented in the Storting. The last party conference of the spring ends on May 21, 2017. By then, all eight parties in the Storting will have adopted new platforms for the 2017-2021 period. Experience suggests a total of about 7,000 new promises by then. All of these promises must go into Holder de ord's promise database.


This is a manual job. Each promise in the party platforms is copied into an Excel sheet and assigned categories according to the Storting's category system. Some rewriting should be expected, so that the promises can stand on their own without their original context. The result is then imported into Holder de ord's online database.

The job also involves a rough sorting of promises suitable for use in Holder de ord's chat bot. Separate guidelines for these will be provided. This part of the job will not affect the total workload.


A contract will be signed, with pay agreed per party platform. We would prefer that whoever takes on the assignment categorizes all of the party platforms, or at least more than two. Late delivery may incur daily penalties.

Send a short application by email to Tiina Ruohonen and Hanna Tranås, marked "Valgløfter" in the subject line.

by Hanna Tranås (hanna@holderdeord.no) at April 19, 2017 07:57 PM

March 16, 2017

Ole Aamot Gnome Development Blog

GUADEC 2017 in Manchester

Yesterday I booked housing and conference tickets for GUADEC 2017 in Manchester, so in July I will fly from Oslo to Manchester, meet other Free Software hackers from the GNOME Project, and fly back to Oslo via Malaga. It will be my only trip this year.

by oleaamot at March 16, 2017 07:02 PM

February 14, 2017

Holder de ord

Government by parliament, at a high level

A fresh white paper shows that the number of anmodningsvedtak (resolutions instructing the government) has skyrocketed. The paper shows a marked increase in the number of matters the Storting now orders the government to carry out.

On Friday, February 10, white paper no. 17 (2016-2017) on instruction and study resolutions for the previous parliamentary session was delivered to the Storting. The white paper has been presented annually since 2000 and contains all the instruction resolutions adopted by the Storting in the preceding session.

In the 2015-2016 session, 477 instruction resolutions were adopted, including the sub-items attached to some of them. This is the highest number of instruction resolutions ever adopted in a single session, and nearly twice as many as during the previous peak in the 2002-2003 session (247 resolutions). Back then, the high number of instruction resolutions led to a debate about «stortingsregjereri» (government by parliament).

A high number of instruction resolutions can challenge the division of responsibility between the legislative and executive powers. These challenges were also discussed at the turn of the millennium, when the Frøiland committee's 2002 report formed part of the debate about stortingsregjereri.


The Storting exercises its right of instruction over the government through, among other things, instruction resolutions. These begin with the formulation «Stortinget ber regjeringen…» ("The Storting asks the government…"). This must not be mistaken for a polite request; it is a constitutionally binding order from the Storting to the government. Often the Storting asks the government to study something or to establish a measure. Orders to oversee something are also common. In many cases, however, the instructions are administrative in character, which raises the question of whether instruction resolutions are being misused.

Instruction resolutions remain valid beyond the parliamentary term. A minister who deliberately fails to follow up an instruction resolution can be impeached and punished with up to five years in prison. Failure to follow up has never led to impeachment, but it shows that strong sanctions back the Storting's right of instruction.

After 2003 the number of instruction resolutions dropped considerably. The decline was helped along by the red-green government being a majority government. From 2005 to 2013 the number of instruction resolutions per session ranged between 7 and 33.

Instruction resolutions also take up more of the Storting's speaking time. Holder de ord's transcript search «Sagt i salen» likewise shows a sharp increase in the number of speeches mentioning instruction resolutions. The use of «stortingsregjereri», however, has not reached the level it had when the term was last in fashion at the turn of the millennium.

How often does the Storting talk about instruction resolutions and stortingsregjereri?


When the Conservative Party and the Progress Party formed a minority government in 2013, the number of instruction resolutions rose. The increase was especially sharp in the period 2014-16. It must be seen in connection with the high number of asylum seekers who arrived in Norway in 2015. The government entered into several asylum agreements with the Storting in which the follow-up points were defined as instruction resolutions. The Ministry of Justice and Public Security consequently administers a third of the instruction resolutions adopted in the 2015-2016 session. Of the roughly 168 resolutions followed up by that ministry, 131 fall under the Minister of Immigration and Integration.

Instruction resolutions - right of instruction or eagerness to instruct?

Central challenges related to the number of instruction resolutions concern the Storting's trust in the government, the legislature's competence to make sensible decisions, the government's independence, and the Storting's oversight of the government.

When the Storting debated the high number of instruction resolutions in 2004, Storting president Jørgen Kosmo (Labour) said that the number had «completely taken off» and that this «upsets the balance between Storting and government». Inge Lønning (Conservative) argued that «power should not be exercised unless the one exercising it also bears the responsibility for that exercise», but that the consequence of «a growing number of more or less binding instruction resolutions is that this basic principle is pulverized».

Many instruction resolutions mean that the Storting is instructing the government to a large degree. If the government enjoys the Storting's confidence, one might think it unnecessary to instruct the government in a series of matters. In this parliamentary term, however, there are several examples of the Storting instructing the government on matters that should in principle be unproblematic to leave to the government.


Resolution no. 159, December 9, 2015: The Storting asks the government to follow up that the regulatory duty to always consider the child's family or close network as a possible foster home when care is taken over is actually practiced.

One would think it superfluous for the Storting to instruct the government to follow up already adopted regulations. The same applies to promises presented in the Sundvolden declaration, the government's political platform, in which carbon capture and storage is addressed among other things:

«The government will invest broadly in developing cost-effective technology for capture and storage of CO2, and has the ambition of realizing at least one full-scale demonstration facility for CO2 capture by 2020.»

Nevertheless, the Storting instructs the government to follow up its own political platform:

Resolution no. 685, May 23, 2016: The Storting asks the government to ensure the realization of at least one CCS facility to help Norway reach its national climate target for 2020.

In another case, the Storting instructs the government on the arming of the police, contrary to promises in the Sundvolden declaration:

«The government will open for general arming in those police districts where the police themselves consider it the best solution.»

Resolution no. 522, May 5, 2015: The Storting asks the government to maintain the current arming practice with an unarmed police force. This does not affect the permission to arm under the weapons instruction in special situations.

169 representatives vs. 21,000 bureaucrats

The Storting consists of 169 representatives distributed across 13 committees. Each committee follows up large areas of society covered, wholly or in part, by several ministries. The ministries and directorates have 21,000 employees (2015). While the civil service consists of permanent staff working in specialized fields over long periods, the Storting consists of representatives elected for four years at a time, often with shifting areas of responsibility.

Through the civil service, the administration has a history of how policy areas are governed and followed up. Furthermore, the administration is subject to the Public Administration Act, with requirements for assessment and impartiality intended, among other things, to ensure fair and transparent case handling. The Storting is not subject to such rules when it exercises its right of instruction.

The Frøiland committee described all the instruction resolutions adopted in the 1999-2000 and 2000-2001 sessions as «food for thought»:

«Some of them were pure requests for further study of matters the Storting wanted to shed more light on. But that still leaves a large category where the Storting can be said to have intervened with binding orders in individual matters of a partly detailed and administrative character, in a way that raises the question of whether this is appropriate and sound.»

If the Storting increasingly instructs the government in specific matters, the likelihood increases that resolutions will be adopted whose implementation can have unintended negative consequences. Isolated resolutions tied to individual cases also make it more likely that, over time, the Storting will adopt resolutions treating similar matters in different ways.

Micromanaging the government

Instruction resolutions also often carry a cost that reduces the government's room for maneuver. The government should be allowed to decide how the goals described in the government's budget proposal and the committees' recommendations are to be met.


The Ministry of Health and Care Services administers a grant scheme for activities for the elderly meant to «counteract loneliness, passivity and social withdrawal and to create activity, participation, social community and meeting places». The Storting appropriates funds for the scheme, but it is the government that awards the funds in accordance with the grant rules and the purpose of the grant.

This balance is upset when the Storting intervenes in the award process:

Resolution no. 999, June 17, 2016: The Storting asks the government to ensure that the criteria for the grant scheme Activities for seniors and the elderly under chapter 761, item 21, are changed so that Tjukkasgjengen can be covered by the scheme.

This is an example of a resolution that places concrete constraints on how the government is to achieve its goals. Here the Storting gives concrete directions as to who is to receive funds from the national budget. The grant scheme must now cover Tjukkasgjengen, in addition to other measures.

In this term the Storting has also gone far in instructing the government on questions of internal coordination.


Resolution 707, May 26, 2016: The Storting asks the government to implement the necessary measures to ensure seamless information flow between the Minister of Defence, the Minister of Foreign Affairs and the Chief of Defence, so that correct and up-to-date information is available when such information is wanted.

Many of the instruction resolutions are worded in a way that makes it difficult to determine whether or not the government has followed them up. In its follow-up of resolution no. 707, the Ministry of Defence writes, among other things, that it has established routines and regular meetings between the Ministry of Defence and the Ministry of Foreign Affairs, with representatives of the Armed Forces and Forsvarsmateriell (the Norwegian Defence Materiel Agency) participating. It is difficult for the Storting to determine whether these measures suffice to ensure seamless information flow.

A legitimate possibility of instruction

The Storting's ability to instruct the government, especially under minority governments and in questions of principle, is legitimate. But the number of instruction resolutions we now see is worrying. The development was described in 2002-03 as dramatic and explosive, while today the number of instructions has doubled. Many and detailed instruction resolutions blur the roles of Storting and government.

At the same time, there is a limit to how many matters the members of the Storting can consider, adopt and follow up adequately. There should also be a limit to how many instruction resolutions the Storting orders the administration to follow up.

Using majority remarks in committee recommendations can be an alternative to instruction resolutions, since following these up has always been part of the Storting's oversight work. Nor are majority recommendations binding beyond the parliamentary term in which the recommendation was made. Since this year's white paper is thicker than ever, alternative forms of resolution for expressing the views of the Storting majority should be considered.

by Herman Westrum Thorsen (herman.westrum.thorsen@gmail.com) at February 14, 2017 08:57 AM

February 13, 2017

Mimes brønn

En innsynsbrønn full av kunnskap

Mimes brønn er en nettjeneste som hjelper deg med å be om innsyn i offentlig forvaltning i tråd med offentleglova og miljøinformasjonsloven. Tjenesten har et offentlig tilgjengelig arkiv over alle svar som er kommet på innsynsforespørsler, slik at det offentlige kan slippe å svare på de samme innsynshenvendelsene gang på gang. Du finner tjenesten på


According to Old Norse mythology, the well of knowledge is guarded by Mímir and lies under one of the roots of the world tree Yggdrasil. Drinking the water of Mímir's well gave such valuable knowledge and wisdom that the young god Odin was willing to pawn an eye, becoming one-eyed, for permission to drink from it.

The site is maintained by the NUUG association and is particularly well suited for politically interested people, organisations and journalists. The service is based on the British sister service WhatDoTheyKnow.com, which has already provided access resulting in documentaries and countless press stories. According to mySociety, a few years ago around 20% of access requests to central government went via WhatDoTheyKnow. We in NUUG hope that NUUG's service Mimes brønn can be just as useful for the inhabitants of Norway.

Over the weekend the service was updated with a lot of new functionality. The new version works better on small screens, and now shows delivery status for requests, so that senders can more easily check that the recipient's email system has confirmed receipt of the access request. The service was set up by volunteers in the NUUG association and was launched in the summer of 2015. Since then, 121 users have sent more than 280 requests about everything from wedding rental of the Opera house and negotiations over the use of Norway's top-level DNS domain .bv, to the registration of housing benefit applications, and the site is a small treasure chest of interesting and useful information. NUUG has engaged lawyers who can assist with appeals against denied access or deficient case handling.

– "NUUG's Mimes brønn was invaluable when we succeeded in ensuring that the .bv DNS top-level domain remains in Norwegian hands," says Håkon Wium Lie.

The service documents widely varying practice in the handling of access requests, both in response time and in the content of the responses. The vast majority are handled quickly and correctly, but in several cases access has been granted to documents that the responsible agency later wished to withdraw, and access has been granted where the redaction was done in a way that does not actually hide the information meant to be redacted.

– "The Freedom of Information Act is a cornerstone of our democracy. It does not care who asks for access, or why. The Mimes brønn project materialises this principle: anyone can request access and appeal refusals, and the documentation is made public. This makes Mimes brønn one of the most exciting transparency projects I have seen in recent times," says Vegard Venli, the man who got the Tax Administration's ownership register opened up.

We in the NUUG association hope Mimes brønn can be a useful tool for keeping our democracy in good health.

by Mimes Brønn at February 13, 2017 02:07 PM

January 28, 2017

NUUG Foundation

Travel grants for students - 2017

NUUG Foundation is announcing travel grants for 2017. Applications may be submitted at any time.

January 28, 2017 12:57 PM

January 06, 2017

Espen Braastad

CentOS 7 root filesystem on tmpfs

Several years ago I wrote a series of posts on how to run EL6 with its root filesystem on tmpfs. This post is a continuation of that series, and explains step by step how to run CentOS 7 with its root filesystem in memory. It should apply to RHEL, Ubuntu, Debian and other Linux distributions as well. The post is kept somewhat terse to focus on the concept, and several of the steps have room for improvement.

The following is a screen recording from a host running CentOS 7 in tmpfs:


Build environment

A build host is needed to prepare the image to boot from. The build host should run CentOS 7 x86_64, and have the following packages installed:

yum install libvirt libguestfs-tools guestfish

Make sure the libvirt daemon is running:

systemctl start libvirtd

Create some directories that will be used later; feel free to relocate these somewhere else:

mkdir -p /work/initramfs/bin
mkdir -p /work/newroot
mkdir -p /work/result

Disk image

For simplicity, we'll fetch our rootfs from a pre-built disk image, though it is possible to build a custom disk image using virt-manager. Most people will probably want to create their own disk image from scratch, but that is outside the scope of this post.

Use virt-builder to download a pre-built CentOS 7.3 disk image and set the root password:

virt-builder centos-7.3 -o /work/disk.img --root-password password:changeme

Export the files from the disk image to one of the directories we created earlier:

guestfish --ro -a /work/disk.img -i copy-out / /work/newroot/

Clear fstab since it contains mount entries that no longer apply:

echo > /work/newroot/etc/fstab

SELinux will complain about an incorrect disk label at boot, so let's just disable it right away. Production environments should keep SELinux enabled.

echo "SELINUX=disabled" > /work/newroot/etc/selinux/config

Disable clearing the screen on login failure to make it possible to read any error messages:

mkdir /work/newroot/etc/systemd/system/getty@.service.d
cat > /work/newroot/etc/systemd/system/getty@.service.d/noclear.conf << EOF
[Service]
TTYVTDisallocate=no
EOF

We’ll create our custom initramfs from scratch. The boot procedure will be, simply put:

  1. Fetch kernel and a custom initramfs.
  2. Execute kernel.
  3. Mount the initramfs as the temporary root filesystem (for the kernel).
  4. Execute /init (in the initramfs).
  5. Create a tmpfs mount point.
  6. Extract our CentOS 7 root filesystem to the tmpfs mount point.
  7. Execute switch_root to boot on the CentOS 7 root filesystem.

The initramfs will be based on BusyBox. Download a pre-built binary or compile it from source, put the binary in the initramfs/bin directory. In this post I’ll just download a pre-built binary:

wget -O /work/initramfs/bin/busybox https://www.busybox.net/downloads/binaries/1.26.1-defconfig-multiarch/busybox-x86_64

Make sure that busybox has the execute bit set:

chmod +x /work/initramfs/bin/busybox

Create the file /work/initramfs/init with the following contents:

#!/bin/busybox sh

# Dump to sh if something fails
error() {
	echo "Jumping into the shell..."
	setsid cttyhack sh
}

# Populate /bin with binaries from busybox
/bin/busybox --install /bin

mkdir -p /proc
mount -t proc proc /proc

mkdir -p /sys
mount -t sysfs sysfs /sys

mkdir -p /sys/dev
mkdir -p /var/run
mkdir -p /dev

mkdir -p /dev/pts
mount -t devpts devpts /dev/pts

# Populate /dev
echo /bin/mdev > /proc/sys/kernel/hotplug
mdev -s

mkdir -p /newroot
mount -t tmpfs -o size=1500m tmpfs /newroot || error

echo "Extracting rootfs... "
xz -d -c -f rootfs.tar.xz | tar -x -f - -C /newroot || error

mount --move /sys /newroot/sys
mount --move /proc /newroot/proc
mount --move /dev /newroot/dev

exec switch_root /newroot /sbin/init || error

Make sure it is executable:

chmod +x /work/initramfs/init

Create the root filesystem archive using tar. The following command also uses xz compression to reduce the final size of the archive (from approximately 1 GB to 270 MB):

cd /work/newroot
tar cJf /work/initramfs/rootfs.tar.xz .

Create initramfs.gz using:

cd /work/initramfs
find . -print0 | cpio --null -ov --format=newc | gzip -9 > /work/result/initramfs.gz

Copy the kernel directly from the root filesystem using:

cp /work/newroot/boot/vmlinuz-*x86_64 /work/result/vmlinuz


The /work/result directory now contains two files with file sizes similar to the following:

ls -lh /work/result/
total 277M
-rw-r--r-- 1 root root 272M Jan  6 23:42 initramfs.gz
-rwxr-xr-x 1 root root 5.2M Jan  6 23:42 vmlinuz

These files can be loaded directly in GRUB from disk, or using iPXE over HTTP using a script similar to:

kernel http://example.com/vmlinuz
initrd http://example.com/initramfs.gz
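
For the GRUB-from-disk case, a menuentry along the following lines should work (a sketch; the entry title and file paths are assumptions, adjust them to wherever vmlinuz and initramfs.gz end up on your boot partition):

```
menuentry 'CentOS 7 in tmpfs' {
    # Paths below are assumptions, not from the original post
    linux16 /vmlinuz
    initrd16 /initramfs.gz
}
```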

January 06, 2017 08:34 PM

November 12, 2016

Anders Einar Hilden

Perl Regexp Oneliners and UTF-8

For my project to find as many .no domains as possible, I needed a regexp for extracting valid domains. This task is made more fun by the inclusion of Norwegian and Sami characters in the set of valid characters.

In addition to [a-z0-9\-], valid dot-no domains can contain the Norwegian æ (ae), ø (o with stroke) and å (a with ring above) (Stargate, anyone?) and a number of Sami characters. ŧ (t with stroke), ç (c with cedilla) and ŋ (simply called “eng”) are some of my favourites.

The following code will print only the first match per line, and uses ŧ directly in the regexp.

echo "fooŧ.no baŧ.no" | perl -ne 'if(/([a-zŧ]{2,63}\.no)/ig) { print $1,"\n"; }'

If we replace if with while we will print any match found in the whole line.

echo "fooŧ.no baŧ.no" | perl -ne 'while(/([a-zŧ]{2,63}\.no)/ig) { print $1,"\n"; }'

Because I’m afraid the regexp (specifically the non-ASCII characters) may be mangled when saved and moved between systems, I want to write the Norwegian and Sami characters using their Unicode code points. Perl supports this using \x{<number>} (see perlunicode).

echo "fooŧ.no baŧ.no" | perl -CSD -ne 'while(/([a-z\x{167}]{2,63}\.no)/ig) { print $1,"\n"; }'

When using code points, I have to specify -CSD for the matching to work. I am not really sure why this is required; if you can explain, please comment or tell me by other means. As you can read in perlrun, -CSD specifies that STDIN, STDOUT, STDERR and all input and output streams should be treated as UTF-8.

Another problem is that if this last solution is fed invalid UTF-8, it will die fatally and stop processing input.

Malformed UTF-8 character (fatal) at -e line 1, <> line X.

To prevent this from happening I currently sanitize my dirty input using iconv -f utf-8 -t utf-8 -c. If you have a better solution, Perl or otherwise, please tell me!
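
Put together, the sanitizing step looks something like this (a sketch; dirty-input.txt is a hypothetical example file):

```shell
# -c makes iconv drop byte sequences that are not valid UTF-8 instead
# of failing, so perl never sees a malformed character.
# dirty-input.txt is a hypothetical input file.
iconv -f utf-8 -t utf-8 -c dirty-input.txt | \
  perl -CSD -ne 'while(/([a-z\x{167}]{2,63}\.no)/ig) { print $1,"\n"; }'
```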

A simple regexp would match the valid characters with a length between 2 and 63, followed by .no. However, I wanted only and all “domains under .no” as counted by Norid in their statistics. Norid's definition of “domains under .no” covers all domains directly under .no, but also domains under category domains, e.g. ohv.oslo.no and ola.priv.no. To get comparable results, I have to collect both *.no and *.<category domain>.no domains when scraping data.

The resulting “one-liner” I use is this…. It once was a one-liner, but with more than 10k characters in the regexp it became hard to manage. The resulting script builds up a regexp that is valid for all Norwegian domains from a list of valid category domains, all valid characters and the other rules for .no domains.
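
The approach can be sketched in a few lines of shell (a sketch; the two-entry category list is made up, the real list from Norid is far longer, and the character class here is ASCII-only for brevity):

```shell
# Hypothetical, heavily abbreviated category domain list
printf 'priv.no\noslo.no\n' > category-domains.txt

# Escape the dots and join the lines into a regexp alternation
cats=$(sed 's/\./\\./g' category-domains.txt | paste -sd'|' -)

# Match domains directly under .no as well as under the category domains
echo "ola.priv.no bar.no baz.oslo.no" | \
  perl -ne "while(/([a-z0-9-]{2,63}\\.(?:${cats}|no))/ig) { print \$1,\"\\n\"; }"
# prints ola.priv.no, bar.no and baz.oslo.no, one per line
```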

November 12, 2016 10:00 PM

September 18, 2016

Dag-Erling Smørgrav

Not up to our usual standards

For a few years now, I’ve been working on and off on a set of libraries which collect cryptography- and security-related code I’ve written for other projects as well as functionality which is not already available under a permissive license, or where existing implementations do not meet my expectations of cleanliness, readability, portability and embeddability.

(Aside: the reasons why this has taken years, when I initially expected to publish the first release in the spring or summer of 2014, are too complex to explain here; I may write about them at a later date. Keywords are health, family and world events.)

Two of the major features of that collection are the OATH Authentication Methods (which includes the algorithm used by Google Authenticator and a number of commercial one-time code fobs) and the Common Platform Enumeration, part of the Security Content Automation Protocol. I implemented the former years ago for my employer, and it has languished in the OpenPAM repository since 2012. The latter, however, has proven particularly elusive and frustrating, to the point where it has existed for two years as merely a header file and a set of mostly empty functions, just to sketch out the API. I decided to have another go at it yesterday, and actually made quite a bit of progress, only to hit the wall again. And this morning, I realized why.

The CPE standard exists as a set of NIST Interagency reports: NISTIR 7695 (naming), NISTIR 7696 (name matching), NISTIR 7697 (dictionary) and NISTIR 7698 (applicability language). The one I’ve been struggling with is 7695—it is the foundation for the other three, so I can’t get started on them until I’m done with 7695.

It should have been a breeze. On the surface, the specification seems quite thorough: basic concepts, representations, conversion between representations (including pseudocode). You know the kind of specification that you can read through once, then sit down at the computer, start from the top, and code your way down to the bottom? RFC 4226 and RFC 6238, which describe OATH event-based and time-based one-time passwords respectively, are like that. NISTIR 7695 looks like it should be. But it isn’t. And I’ve been treating it like it was, with my nose so close to the code that I couldn’t see the big picture and realize that it is actually not very well written at all, and that the best way to implement it is to read it, understand it, and then set it aside before coding.

One sign that NISTIR 7695 is a bad specification is the pseudocode. It is common for specifications to describe algorithms, protocols and / or interfaces in the normative text and provide examples, pseudocode and / or a reference implementation (sometimes of dubious quality, as is the case for RFC 4226 and RFC 6238) as non-normative appendices. NISTIR 7695, however, eschews natural-language descriptions and includes pseudocode and examples in the normative text. By way of example, here is the description of the algorithm used to convert (“bind”, in their terminology) a well-formed name to a formatted string, in its entirety:

Summary of algorithm

The procedure iterates over the eleven allowed attributes in a fixed order. Corresponding attribute values are obtained from the input WFN and conversions of logical values are applied. A result string is formed by concatenating the attribute values separated by colons.

This is followed by one page of pseudocode and two pages of examples. But the examples are far from exhaustive; as unit tests, they wouldn’t even cover all of the common path, let alone any of the error handling paths. And the pseudocode looks like it was written by someone who learned Pascal in college thirty years ago and hasn’t programmed since.

The description of the reverse operation, converting a formatted string to a well-formed name, is slightly better in some respects and much worse in others. There is more pseudocode, and the examples include one—one!—instance of invalid input… but the pseudocode includes two functions—about one third of the total—which consist almost entirely of comments describing what the functions should do, rather than actual code.

You think I’m joking? Here is one of them:

function get_comp_fs(fs,i)
  ;; Return the i’th field of the formatted string. If i=0,
  ;; return the string to the left of the first forward slash.
  ;; The colon is the field delimiter unless prefixed by a
  ;; backslash.
  ;; For example, given the formatted string:
  ;; cpe:2.3:a:foo:bar\:mumble:1.0:*:...
  ;; get_comp_fs(fs,0) = "cpe"
  ;; get_comp_fs(fs,1) = "2.3"
  ;; get_comp_fs(fs,2) = "a"
  ;; get_comp_fs(fs,3) = "foo"
  ;; get_comp_fs(fs,4) = "bar\:mumble"
  ;; get_comp_fs(fs,5) = "1.0"
  ;; etc.

This function shouldn’t even exist. It should just be a lookup in an associative array, or a call to an accessor if the pseudocode was object-oriented. So why does it exist? Because the main problem with NISTIR 7695, which I should have identified on my first read-through but stupidly didn’t, is that it assumes that implementations would use well-formed names—a textual representation of a CPE name—as their internal representation. The bind and unbind functions, which should be described in terms of how to format and parse URIs and formatted strings, are instead described in terms of how to convert to and from WFNs. I cannot overstate how wrong this is. A specification should never describe a particular internal representation, except in a non-normative reference implementation, because it prevents conforming implementations from choosing more efficient representations, or representations which are better suited to a particular language and environment, and because it leads to this sort of nonsense.

So, is the CPE naming specification salvageable? Well, it includes complete ABNF grammars for URIs and formatted strings, which is good, and a partial ABNF grammar for well-formed names, which is… less good, but fixable. It also explains the meanings of the different fields; it would be useless otherwise. But apart from that, and the boilerplate at the top and bottom, it should be completely rewritten, including the pseudocode and examples, which should reference fictional, rather than real, vendors and products. Here is how I would structure it (text in italic is carried over from the original):

  1. Introduction
    1.1. Purpose and scope
    1.2. Document structure
    1.3. Document conventions
    1.4. Relationship to existing specifications and standards
  2. Definitions and abbreviations
  3. Conformance
  4. CPE data model
    4.1 Required attributes
    4.2 Optional attributes
    4.3 Special attribute values
  5. Textual representations
    5.1. Well-formed name
    5.2. URI
    5.3. Formatted string
  6. API
    6.1. Creating and destroying names
    6.2. Setting and getting attributes
    6.3. Binding and unbinding
  7. Non-normative examples
    7.1. Valid and invalid attribute values
    7.2. Valid and invalid well-formed names
    7.3. Valid and invalid URIs
    7.4. Valid and invalid formatted strings
  8. Non-normative pseudo-code
  9. References
  10. Change log

I’m still going to implement CPE naming, but I’m going to implement it the way I think the standard should have been written, not the way it actually was written. Amusingly, the conformance chapter is so vague that I can do this and still claim conformance with the Terrible, Horrible, No Good, Very Bad specification. And it should only take a few hours.

By the way, if anybody from MITRE or NIST reads this and genuinely wants to improve the specification, I’ll be happy to help.

PS: possibly my favorite feature of NISTIR 7695, and additional proof that the authors are not programmers: the specification mandates that WFNs are UTF-8 strings, which are fine for storage and transmission but horrible to work with in memory. But in the next sentence, it notes that only characters with hexadecimal values between x00 and x7F may be used, and subsequent sections further restrict the set of allowable characters. In case you didn’t know, the normalized UTF-8 representation of a sequence of characters with hexadecimal values between x00 and x7F is identical, bit by bit, to the ASCII representation of the same sequence.

by Dag-Erling Smørgrav at September 18, 2016 01:54 PM

August 24, 2016

Nicolai Langfeldt

The most important thing about Apache

The most important thing you should know about Apache, the web server, is that if you feel stupid after trying to get something working on it, this is perfectly normal, and it does not mean you are stupid.

Try NGiNX next time.  If it supports the thing you need.

by Nicolai Langfeldt (noreply@blogger.com) at August 24, 2016 08:26 PM

July 15, 2016

Mimes brønn

Who has been drinking from Mimes brønn?

Mimes brønn has now been up and running for about a year, so we thought it could be interesting to share some brief statistics on how the service has been used.

At the beginning of July 2016, Mimes brønn had 71 registered users who had sent out 120 access requests, of which 62 (52%) were successful, 19 (16%) partially successful, 14 (12%) refused, and 10 (8%) received a reply that the body did not hold the information, while 12 requests (10%; 6 from 2016, 6 from 2015) were still unanswered. A handful (3) of the requests could not be categorised. So roughly two thirds of the requests were wholly or partly successful. That is good!

The time before the body sends its first reply varies a lot, from the same day (some requests sent to the Immigration Appeals Board, the Public Roads Administration, Økokrim, the Media Authority, the Data Protection Authority and the Brønnøysund Register Centre), up to 6 months (Ballangen municipality) or longer (the Storting, the Ministry of Petroleum and Energy, the Ministry of Justice and Public Security, the Directorate of Immigration (UDI) and Statistics Norway have received access requests that are still unanswered). The average here was a couple of weeks (disregarding the 12 cases where no reply has come). It follows from section 29, first paragraph, of the Freedom of Information Act that requests for access to the administration's documents must be answered "without undue delay", which according to the Parliamentary Ombudsman should in most cases be interpreted as "the same day, or in any case within 1-3 working days". So there is room for improvement here.

The right of appeal (offentleglova section 32) was used in 20 of the access requests. In most (15; 75%) of the cases the appeal led to the request being successful. The average time to get a reply to the appeal was one month (disregarding 2 cases, appeals sent to the Public Roads Administration and Ruter AS, where no reply has come). Appealing is well worth it, and completely free! The Parliamentary Ombudsman has stated that 2-3 weeks is beyond acceptable processing time for appeals.

Most requests were sent to the Ministry of Foreign Affairs (9), closely followed by Fredrikstad municipality and the Brønnøysund Register Centre. In total, requests were sent to 60 public authorities, of which 27 received two or more. There are over 3700 authorities in the Mimes brønn database, so most of them have yet to receive an access request via the service.

Looking at what kind of information people have asked for, we see a broad range of interests: everything from the municipality's parking spaces, travel expense claims exceeding the state's accommodation rates, correspondence about asylum reception centres and negotiations over the .bv top-level domain, to documents about Myanmar.

Public authorities do all sorts of things. Some of it is done badly, some of it well. The more we find out about how the authorities work, the better placed we are to suggest improvements to what works badly, and to applaud what works well. If there is something you would like access to, just go to https://www.mimesbronn.no/ and get started 🙂

by Mimes Brønn at July 15, 2016 03:56 PM

June 01, 2016

Kevin Brubeck Unhammer

Machine translation vs the NTNU examiner

Twitter user @IngeborgSteine recently got some attention when she tweeted a picture of the Nynorsk version of her economics exam at NTNU:

This was my economics exam in "nynorsk". #nynorsk #noregsmållag #kvaialledagar https://t.co/RjCKSU2Fyg
Ingeborg Steine (@IngeborgSteine) May 30, 2016

Creative coinages like *kvisleis, and all the dialect forms and archaisms, would have been unlikely in a machine-translated version, so I wondered how much better/worse it would have been if the examiner had simply used Apertium instead? Ingeborg Steine was kind enough to post the Bokmål version, so let's give it a try 🙂


No kvisleis, and free of tær and fyr, but it is not perfect either: certain words are missing from the dictionaries and thus get the wrong inflection, teller is interpreted as a noun, ein anna maskin has the wrong inflection of the first word (a rule was missing there) and at is in one place interpreted as an adverb (leading to the curious fragment det verta at anteke tilvarande). In addition, the web page detects the language as Tatar, so perhaps it was rather heavy Norwegian? 🙂 But these errors are not particularly hard to fix; the development version of Apertium now gives:


There are still a couple of small things that could be fixed, but it is already better than most of the exams I was handed at UiO …

by k at June 01, 2016 09:45 AM

May 29, 2016

Espen Braastad

Filebin upgrade

https://filebin.net is a public and free file upload/sharing service. Its main design principle is to be incredibly simple to use.

It has been in production for several years, and has more or less been unmodified until now. Today it has been upgraded in several ways, and this post aims to elaborate on some of the changes.

Complete rewrite

The previous version of Filebin was written in Python and kept metadata in MongoDB. For a number of reasons, Filebin has been completely rewritten in Go. It no longer depends on any database except the local filesystem.

Some of the most visible changes are:

New hardware and software stack

The infrastructure, bandwidth and hardware needed to run filebin.net is sponsored by Redpill Linpro, the leading provider of professional Open Source services and products in the Nordic region.

As part of today's upgrade, filebin.net has been migrated into their awesome IaaS cloud, which is based on OpenStack and Ceph, runs on modern hardware and spans multiple locations.

The source code of Filebin is available on GitHub. Bugs are reported and tracked in GitHub issues.

Feel free to reach out with feedback and suggestions by email to espebra(a)ifi.uio.no, or by leaving a comment to this blog post.

May 29, 2016 06:40 PM

April 02, 2016

Thomas Sødring

Choices made on the road to 0.1

You can drive yourself mad wondering whether you made the right choice of technology. This really is a difficult question to answer, as you have to pick components that have longevity and are in widespread use. The truth is that you just have to pick something and go with it. I think about the popularity of libraries, how active their development is, etc. before I make a choice, but it is not easy to just decide. Any component I use now will follow the project and code for a long time going forward.

This week I was wrestling with AngularJS and ReactJS. Basically it boils down to whether I go with Google or with Facebook. I picked up some cheap courses on Angular, and that kinda made the decision. I'm not really that bothered by the GUI side of things at the moment, but I do need an administrative GUI and would like an idea of how a proof-of-concept GUI would look. Given that this is a REST service, it will be possible to swap Angular out for whatever you want anyway. Trying to figure these things out is a very time-consuming process.

The last month has been spent wondering how I should structure the project. If I get the foundation wrong, it will have a negative effect on the project. Baeldung has an interesting project structure with a clean definition of modules and what belongs within them. This quickly became the basis of my project structure. I kept coming across jhipster, and after days of hassle (installing npm, bower, yo) I managed to get an interesting project set up. What I learnt from the jhipster sample app was support for swagger, metrics, spring-security, AngularJS and yaml project configuration. I was initially unable to get the jhipster app to run, so I have spent the time studying the code and structure and gradually copied elements over to my project. This has resulted in the nikita code base supporting swagger, the introduction of metrics support, and spring-security user configuration, all copied from the jhipster sample application.

This approach has really saved me a lot of time and answered many questions about spring and spring-based applications; I have learnt so much from it. A plus of studying best practices like this is that I will hopefully end up with a good project structure and robust code. A negative is that I'm learning as I go along. Ideally I'd sit down and figure everything out in advance, but I think that's the primary reason why it has been difficult to move this project forward over the last couple of years: I never had my own concrete project structure to work with and was unsure how to proceed.

I also switched IDE, from Eclipse to Eclipse STS to IntelliJ IDEA. I never seemed to be able to get things working nicely in Eclipse and STS. I always ended up with issues like not being able to find source code when debugging, or downloading sources and documentation not working properly. I spent a lot of time on Stack Exchange, but it really felt like a waste of time and I didn't have an environment I felt comfortable and productive in. IDEA has been a dream to work with. It just does things intuitively, and the integration with git has allowed me to push code and changes quickly to GitHub. I have never been as impressed with an IDE as I have been with IDEA. It just seems to make sense.

I was also able to confirm that OData support is still in the draft version of Noark 5 v4 and will more than likely be in the final version. This complicates development of the REST service significantly, but I think I will solve it in the codebase by supporting two APIs, one with OData and one without. The reason is that OData support requires me to handle all incoming HTTP requests manually. To be honest, I am unsure about the usefulness of OData in a running installation, but if the standard specifies it then we simply have to implement it. There is very little REST OData support in the Java ecosystem, but there is something available that we can use.

Currently the code is very much a pre-alpha version of v0.1. It is mainly a working project structure with the above-mentioned libraries, with the domain model copied in; the fonds object is accessible via a REST controller. Don't expect the code to work until it hits the v0.1 mark, as I am updating it continuously. You can check out the code from the GitHub repository.

The post Choices made on the road to 0.1 appeared first on Arkivets rolle i en tjenesteorientert arkitektur.

by tsodring at April 02, 2016 05:38 AM

April 01, 2016

Thomas Sødring

Current project structure

One of the main challenges with this project is that I am not in a position to work on it full time. In the last month I have probably spent 80 hours on it, half of which came from my own free time. So whatever time I do have has to be spent wisely. I have few days to thoroughly explore issues, and threads of thought are split up over several days.

In the last month I have made some interesting progress. I have spent the time working on the project structure and have moved files around quite a lot.

Currently the project is a multi-module maven project with the following modules:

core-client is where most of the domain modelling of Noark 5 can be found. All persistence related objects are here, DTO’s etc.

core-common contains a lot of common functionality related to REST handling etc. This is code that could be reused in other Noark 5 REST related projects.

core-conversion will be a REST service that can convert documents from a production format to an archive format. I will only implement integration with LibreOffice, but it is easy to imagine implementing integration with MS Office as well. I haven't started this yet.

core-extraction will be a standalone executable jar that can extract the contents of the core in accordance with the extraction rules. Currently a weak arkivstruktur.xml generator has been implemented, just as a proof of concept.

core-webapp is the actual web application that is a spring-boot application and starts up a REST service.

Another module that needs to be implemented is core-postjournal, which talks to the database and publishes the postjournal in various formats. Integration with Altinn and Digipost etc. (core-dispatcher) are obvious candidates for work, but currently the project needs a clearly defined roadmap, so these can all come later.

All the modules are encapsulated inside a parent module called nikita-noark5-core.

The post Current project structure appeared first on Arkivets rolle i en tjenesteorientert arkitektur.

by tsodring at April 01, 2016 01:44 PM

February 17, 2016

Dag-Erling Smørgrav

FreeBSD and CVE-2015-7547

As you have probably heard by now, a buffer overflow was recently discovered in GNU libc’s resolver code which can allow a malicious DNS server to inject code into a vulnerable client. This was announced yesterday as CVE-2015-7547. The best sources of information on the bug are currently Google’s Online Security Blog and Carlos O’Donnell’s in-depth analysis.

Naturally, people have started asking whether FreeBSD is affected. The FreeBSD Security Officer has not yet released an official statement, but in the meantime, here is a brief look at the issue as far as FreeBSD is concerned.

First of all: neither FreeBSD itself nor native FreeBSD applications are affected. While the resolver in FreeBSD’s libc and GNU libc share a common ancestry, the bug was introduced when the latter was rewritten to send A and AAAA queries in parallel rather than sequentially when the application requests both.

However, Linux applications running under emulation on a FreeBSD system use the GNU libc and are therefore vulnerable unless patched. I believe, but have not verified, that the linux_base-c6 port contains a vulnerable version of GNU libc while the older linux_base-f10 port does not. For now, it is safest to assume that they are both vulnerable.

UPDATE 2016-02-17 18:40 UTC: the linux_base-c6 port has been updated in ports-head, 2016Q1 branch to follow, no word on linux_base-f10

UPDATE 2016-02-18 00:15 UTC: the quarterly branch has been updated

The issue can be mitigated by only using resolvers you trust, and configuring them to avoid sending responses which can trigger the bug.

If you already have your own resolvers, you can configure them to avoid sending UDP responses larger than 2048 bytes. If the response does not fit in 2048 bytes, the server will send a truncated response, and the client should retry using TCP. While a similar bug exists in the code path for TCP requests, I believe that it can only be exploited by a malicious resolver, and interposing your own resolver will protect affected Linux systems and applications.
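With unbound, that 2048-byte cap can be expressed directly. The following fragment is an illustration, not the contents of the linked config file, and the access-control network is a placeholder you must adapt:

```conf
server:
        # placeholder: allow queries from your own network only
        access-control: 192.0.2.0/24 allow
        # never send UDP responses larger than 2048 bytes; larger
        # answers are truncated so the client retries over TCP
        max-udp-size: 2048
```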

If you do not already have your own resolvers, you can set one up in a couple of minutes as follows:

# pkg install -y unbound
# fetch -o /usr/local/etc/unbound https://blog.des.no/wp-content/uploads/2016/02/unbound.conf
# vi /usr/local/etc/unbound/unbound.conf
  ... fix access-control statements as mentioned in comments ...
# echo 'unbound_enable="yes"' >>/etc/rc.conf
# service unbound start

(link to the config file)

You must then edit /etc/resolv.conf on all affected systems to point to your new resolver. If you are running Linux applications under emulation, make sure that there is no /compat/linux/etc/resolv.conf, as it would override the system-wide /etc/resolv.conf.

Note that unlike FreeBSD libc, GNU libc does not automatically pick up changes to /etc/resolv.conf, so you will have to restart all affected Linux services and applications.

by Dag-Erling Smørgrav at February 17, 2016 05:53 PM

December 16, 2015

NUUG Foundation

Travel grants for students - 2016

NUUG Foundation announces travel grants for 2016. Applications may be submitted at any time.

December 16, 2015 11:28 AM

October 18, 2015

Anders Nordby

Fighting spam with SpamAssassin, procmail and greylisting

On my private server we use a number of measures to stop and prevent spam from arriving in the users' inboxes:

- postgrey (greylisting) to delay arrival (hopefully block lists will be up to date in time to stop unwanted mail; some senders also do not retry)
- SpamAssassin, to block mails by scoring different aspects of the emails. Newer versions of it have URIBL (domain-based, for links in the emails) in addition to the traditional RBL (IP-based) block lists, which works better. I also created my own URIBL block list which you can use, dbl.fupp.net.
- procmail. For users on my server, I recommend this procmail rule:

  :0
  * ^X-Spam-Status: Yes
  .crapbox/

  It will sort emails with a score indicating spam into the mailbox "crapbox".
- Blocking unwanted and dangerous attachments, particularly for Windows users.

by Anders (noreply@blogger.com) at October 18, 2015 01:09 PM

April 23, 2015

Kevin Brubeck Unhammer


In the previous post in this series I briefly went through various methods for generating translation candidates for bilingual dictionaries; in this post I will go into a bit more detail on candidate generation by translating the individual parts of compound words. As mentioned, we already have a dictionary between Bokmål and North Saami, which we want to extend to Bokmål–Lule Saami and Bokmål–South Saami. The dictionary was developed for translating typical "ministry language", so it is full of long compound words. And in Saami we can compound words in roughly the same way as in Norwegian (in addition to a bunch of other ways, but we will skip merrily past those for now). We should be able to exploit this, so that if we know what "klage" (complaint) is in Lule Saami, and we know what "frist" (deadline) is, then we have at least one reasonable hypothesis for what "klagefrist" might be in Lule Saami 🙂

Splitting compounds is great when you are translating dictionaries. Compounds erroneously written as separate words are great when you want a little smile.
«Ananássasuorma» jali «ananássa riŋŋgu»? Ij le buorre diehtet.

So we can use the few translations we already have between Bokmål and Lule/South Saami to create more translations, by translating parts of words and then putting the parts back together. We also have a couple of translations between North Saami and Lule/South Saami lying around, so we can use the same method there (and exploit the fact that we have a Bokmål–North Saami dictionary to close the loop back to Bokmål).

Coverage and precision

Unfortunately (in this context) we often also have several translations of each word; the existing Bokmål–Lule Saami dictionaries we are looking at (largely based on Anders Kintel's dictionary) say that "klage" can be, among others, gujdalvis, gujddim, luodjom or kritihkka, while "frist" can be ájggemierre, giehtadaláduvvat, mierreduvvam or ájggemærráj. If we allow every left-hand part to combine with every right-hand part, we get 16 possible candidates for this one word! Probably no more than one or two of them are usable (and maybe not even that). On average we get roughly twice as many candidates as source words with this method. So we should find ways to cut down on bad candidates.

The complementary challenge is getting good enough coverage. Sometimes we have no translation of the parts of a word, even though we have translations of other words containing those same parts. That sentence needs an example 🙂 We would like a candidate for the word "øyekatarr" (eye catarrh) in Lule Saami, i.e. the compound "øye+katarr". We may have a translation for "øye" in our material, but nothing for "katarr". However, the material does say that "blærekatarr" (bladder catarrh) is gådtjåráhkkovuolssje. So to extend coverage, we can additionally split our source material into all pairs of compound parts; if we know that these words can be analysed as "blære+katarr" and gådtjåráhkko+vuolssje, it would seem that "blære" is gådtjåráhkko and "katarr" is vuolssje (and Giellatekno fortunately has good morphological analysers that split such words at the right place). This gives a good extension of the material – in fact we get candidates for almost twice as many of the words we want candidates for if we extend the source material this way. But it has a big disadvantage too: we get more than twice as many Lule/South Saami candidates per Bokmål word (on average around four candidates per source word).

Filtering and ranking

We want to narrow the possible candidates down to those most likely to be good. The best test is to see whether the candidate occurs in a corpus, preferably in the same aligned parallel sentence (such a candidate is usually good). If not, we can also look at whether the candidate and the source word have similar frequencies, or whether the candidate has any frequency at all.

The compound-part translation suggested tsavtshvierhtie for "virkemiddel" (policy instrument), and the two appeared in a parallel sentence as well:
<s xml:lang="sma" id="2060"/>Daesnie FoU akte vihkeles tsavtshvierhtie .
<s xml:lang="nob" id="2060"/>Her er FoU er et viktig virkemiddel .

– so that is probably a good word pair.

Unfortunately we have so little text for Lule/South Saami that we quickly run out of candidates with any frequency at all. For South Saami, for example, we only have candidates with corpus hits for around 10% of the words we generate candidates for.

Another test, which works for all words, is to see whether the candidate gets an analysis from our morphological analysers; if not (and if it has no corpus hits either), it is usually wrong. But this only removes about a quarter of the candidates; with our split-up dictionary (where we also include pairs of word parts) we still have around three candidates per source word on average.

(One test I tried, but rejected, was filtering based on similar word length. It seems logical that long words translate to long words and short to short, but there are many good exceptions. Besides, it removes far too few bad candidates to seem worth it.)

Our parallel corpus material is far too small, but when we generate candidates for dictionaries, it is not parallel sentences we are trying to predict, but parallel words and dictionary pairs. And then our training material is really our existing dictionaries … So I tried looking at which compound parts were actually used in our previous translations, which pairs of parts appeared often in previous translations, and which parts did so rarely or never. For example, our split-up Bokmål–Lule Saami dictionary has these pairs:

Here we see that "løyve" (licence) can be either loahpádus or doajmmaloahpe – should "taxiløyve" (taxi licence) then be táksiloahpádus or táksidoajmmaloahpe? Based on this material we should probably go for the former – although doajmmaloahpe is listed, only loahpádus actually appears in compound words.

We can then try to generate candidates for all the Bokmål words in our material, both those we are actually after candidates for and those we already have translations for. Then we go through the generated candidates for the words we already have translations for, and count the pairs of word parts that generated such words. Perhaps we created the candidates barggo+loahpádus and barggo+doajmmaloahpe for "arbeids+løyve" (work licence); when we then go through the existing translations and find that "arbeidsløyve" was in the dictionary with the translation barggoloahpádus, we increase the frequency of the pair "løyve"–loahpádus by one, while "løyve"–doajmmaloahpe stays at zero.

For now I have only filtered out the candidates where the pair for either the first or the second part had zero frequency. According to a bit of manual evaluation by a linguist, it is almost exclusively bad words that get thrown out, so that filter seems to work well. On the other hand, only around 10% of the candidates are removed if we only throw out the zero-frequency ones, so the next step is to use the frequencies for a full ranking.

If every word could be split into exactly two parts, counting pairs of parts and individual parts might be enough to estimate probabilities, i.e. f(s,t)/f(s). But sometimes words can be split in several ways; for example, "sommersiidastyre" can be seen as "sommer+siidastyre" or "sommersiida+styre" (I have chosen to stick to two-way splits, to avoid too many alternative candidates). If the translation is giessesijddastivrra, with the analyses giesse+sijddastivrra or giessesijdda+stivrra, we have no immediate reason to choose one over the other (well, we have length in this case, but that does not hold in all such examples, and we can have pairs of analyses that are 2–3 or 3–2). Then we also cannot say which pair of word parts (s,t) to increment when we see "sommersiidastyre"–giessesijddastivrra in the training material. But if we additionally see "styre"–stivrra somewhere else, we suddenly have grounds for a decision. Methods like Expectation Maximization can combine related frequencies in this way to arrive at good estimates, but I have not yet got around to implementing this.

by k at April 23, 2015 06:11 PM

April 14, 2015

NUUG events video archive


April 14, 2015 11:13 AM

February 12, 2015

Salve J. Nilsen

On Bandwagonbuilders and Bandwagoneers

<Farnsworth>Good news, everyone!</Farnsworth>

The League of Bandwagonbuilders have spoken – Perl 6 is likely to be “production ready” sometime in 2015! This means it’s time for the Bandwagoneers to start preparing.

Bandwagoneer – that’s you and me, although you may call yourself something different. Perl Monger. Perl Enthusiast. Or just someone who has realized that all volunteer-based Open Source communities need people who care about making stuff happen in meatspace.

At Oslo.pm (I’m a board member there), we’re doing exactly that. We’re Bandwagoneers, spending some of our own valuable time showing others where the cool stuff is, and showing them how to get it. Here’s some of what we’re up to:

Also worth mentioning; a few weeks ago we also had an introduction to Perl 6’s Foreign Function Interface (called NativeCall), courtesy of Arne Skjærholt. It was quite useful, and I hear Arne’s happy to accept invitations from Perl Monger groups to come visit and give the same presentation. :)

Enough bragging, already!

Being a Bandwagoneer means it’s your task to make stuff happen. There are many ways to do it, and I hope you can find some inspiration in what Oslo.pm is doing. Maybe get in touch with some of the Bandwagonbuilders in #perl6 on irc.freenode.org, and ask if anyone there would like to visit your group? I think that would be cool.

Get cracking! 😀

by sjn at February 12, 2015 11:20 PM

January 06, 2015


NSA-proof SSH

One of the biggest takeaways from 31C3 and the most recent Snowden-leaked NSA documents is that a lot of SSH stuff is .. broken.

I’m not surprised, but then again I never am when it comes to this paranoia stuff. However, I do run a ton of SSH in production and know a lot of people that do. Are we all fucked? Well, almost, but not really.

Unfortunately most of what Stribika writes about in the “Secure Secure Shell” doesn’t work for old production versions of SSH. The cliff notes for us real-world people, who will realistically be running SSH 5.9p1 for years, are hidden in the bettercrypto.org repo.

Edit your /etc/ssh/sshd_config:

Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160
KexAlgorithms diffie-hellman-group-exchange-sha256

Basically, the nice and forward-secure aes-*-gcm and chacha20-poly1305 ciphers, the curve25519-sha256 Kex algorithm and the Encrypt-then-MAC message authentication modes are not available to those of us stuck in the early 2000s. That’s right: provably NSA-proof stuff is not supported. Upgrading at this point makes sense.

Still, we can harden SSH: go into /etc/ssh/moduli and delete all the moduli whose 5th column is < 2048, and disable the DSA and ECDSA host keys:

cd /etc/ssh
mkdir -p broken
mv moduli ssh_host_dsa_key* ssh_host_ecdsa_key* ssh_host_key* broken
awk '{ if ($5 > 2048){ print } }' broken/moduli > moduli
# create broken links to force SSH not to regenerate broken keys
ln -s ssh_host_ecdsa_key ssh_host_ecdsa_key
ln -s ssh_host_dsa_key ssh_host_dsa_key
ln -s ssh_host_key ssh_host_key

Your clients, which hopefully have more recent versions of SSH, could have the following settings in /etc/ssh/ssh_config or .ssh/config:

Host all-old-servers

    Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-ripemd160
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256

Note: Sadly, the -ctr ciphers do not provide forward security and hmac-ripemd160 isn’t the strongest MAC. But if you disable these, there are plenty of places you won’t be able to connect to. Upgrade your servers to get rid of these poor auth methods!

Handily, I have made a little script to do all this and more, which you can find in my Gone distribution.

There, done.


Updated Jan 6th to highlight the problems of not upgrading SSH.
Updated Jan 22nd to note CTR mode isn’t any worse.
Go learn about COMSEC if you didn’t get trolled by the title.

by kacper at January 06, 2015 04:33 PM

December 08, 2014


sound sound


Recently I've been doing some video editing.. less editing than tweaking my system, though.
If you want your JACK output to talk to Kdenlive, a most excellent video editing suite,
and output audio in a nice way without choppiness and popping, which I promise you is not nice,
you'll want to pipe it through PulseAudio, because the ALSA-to-JACK stuff doesn't do well with Phonon, at least not on this convoluted setup.

Remember, to get that setup to work, ALSA pipes to JACK via the pcm.jack { type jack .. } definition, and you remove the ALSA-to-PulseAudio stupidity at /usr/share/alsa/alsa.conf.d/50-pulseaudio.conf.

So, once that’s in place, it still won’t play even though PulseAudio found your JACK, because your clients are defaulting to some ALSA device… this is when you change /etc/pulse/client.conf and set default-sink = jack_out.
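For reference, a minimal sketch of the two fragments involved, assuming the standard ALSA JACK plugin from alsa-plugins; the port names are placeholders to adapt to your setup:

```conf
# ~/.asoundrc (or /etc/asound.conf): route ALSA clients into JACK
pcm.jack {
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
}

# /etc/pulse/client.conf: make PulseAudio clients use the JACK sink
default-sink = jack_out
```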

by kacper at December 08, 2014 12:18 AM

November 18, 2014

Anders Einar Hilden

Changing the Subnet Mask in Vmware Workstation on Debian Jessie

I’m currently attending SANS SEC504: Hacker Tools, Techniques, Exploits and Incident Handling in London. For some of the labs in the course we need machines on the IPs and with a subnet mask of

Changing the subnet mask for the NAT or host-only networks in VMware Workstation seems like such an easy thing to do. According to VMware it should be as easy as opening the Virtual Network Editor and “type a new value in the Subnet mask text box”.

Oh wait … I can’t change it. The field for subnet mask in the Virtual Network Editor is not editable.

VMware Virtual Network Editor: the field for subnet mask is not editable

Let’s keep googling - plenty of matches, but everyone keeps insisting it can be changed in the GUI, or mixes up the subnet mask with the subnet IP. Some posts blame permissions, but since the Virtual Network Editor always runs as root, that’s not the problem. There are no listings for the vmnets in /etc/network/interfaces or /etc/network/interfaces.d/, and changing the subnet mask in NetworkManager does nothing.

After a lot of thinking (and just after I checked /etc/network/interfaces) I found /etc/vmware/networking - BINGO! This looks like just the file we were looking for.

Before editing the file we should stop any vmware-related services that might use these files.

$ sudo service vm<TAB>
vmamqpd vmware vmware-USBArbitrator vmware-workstation-server

I’m not sure which of these services use the files we are editing, so we’ll stop them all:

$ sudo service vmamqpd stop
$ sudo service vmware stop
$ sudo service vmware-USBArbitrator stop
$ sudo service vmware-workstation-server stop

For the SANS course I have set up a new host-only network, vmnet2. Since we are using static IPs, and will be running malware on these systems, I have disabled DHCP and not connected a host virtual adapter. The shared folder option Map as a network drive in Windows guests still works, don’t ask me how. Below is the configuration for vmnet2 with a subnet mask of

answer VNET_2_DHCP no
answer VNET_2_DHCP_CFG_HASH E9892EF1006EBB5D4996DF1A377B10EB0D542B94

Success! (but continue reading, we update the DHCP configuration below the picture)

VMware Virtual Network Editor: the uneditable field contains the subnet mask we wanted

VMware stores DHCP config and leases in /etc/vmware/vmnet<NUM>/dhcpd/. If we have changed the subnet IP, subnet mask, or turned DHCP on or off, these files need to be updated. The config file contains autogenerated information surrounded by “DO NOT MODIFY SECTION” markers, so we should probably not edit it manually.

If we open VMware Virtual Network Editor (sudo vmware-netcfg), change a setting (e.g. the subnet IP from to, save, and then change it back again, VMware will update the files for us.

November 18, 2014 02:50 PM

January 29, 2014

Nicolai Langfeldt

git log -p splitting

At work we have two related code bases.  Recently one of them received a lot of loving, and the other needed the same treatment to work better with a new Perl and new modules. The first one has gotten several hundred patches, and browsing that many and cherry-picking them got tiresome.  It was easier for me to split the whole log into separate patches, review and apply them one by one, and then move the "done" patches to a different directory.

Here is the small hack to split "git log -p" output into one patch per commit:


awk 'BEGIN { FN = 0 }
     /^commit / { close(FN); FN++ }  # each commit starts a new numbered file
     { print $0 >> FN }' "$1"

by Nicolai Langfeldt (noreply@blogger.com) at January 29, 2014 09:40 AM

February 24, 2013

Bjørn Venn

Chromebook; a real cloud computer – but will it work in the clouds?

<iframe allowfullscreen="" frameborder="0" height="315" src="http://www.youtube.com/embed/63ZAvyrxkOA" width="470"></iframe>

Want one? It is not for sale in Norway yet, but you can buy it on Amazon. Read here how I bought mine on Amazon (scroll a bit down the page). With Norwegian VAT, delivered to the Rimi shop 100 metres from where I live, it came to 1,850 kroner. It is absolutely worth it :)

by Bjorn Venn at February 24, 2013 07:34 PM

February 22, 2013

Bjørn Venn

Who can get me one of these before Easter?

Chromebook pixel

Google's new Chromebook, the Chromebook Pixel. For now only on sale in the US and UK via Google Play and BestBuy.

The world is unfair :)

by Bjorn Venn at February 22, 2013 12:44 PM

January 07, 2013

NUUG events video archive

Challenges in identity management and authentication

Dag-Erling Smørgrav points out how Unix has an authentication paradigm that has not changed in 40 years, while major developments have taken place on this front in recent years.

January 07, 2013 10:00 PM

May 29, 2012

Salve J. Nilsen

Inviting to the Moving to Moose Hackathon 2012

Oslo Perl Mongers are organizing a hackathon for everyone who would like to dive deep into the details of Moose! We have invited the #p5-mop crowd to work on getting a proper Meta Object Protocol into Perl core, and we’ve invited the #perlrdf crowd to come and convert the Perl RDF toolchain to Moose.

You’re welcome to join us!

Special rebate for members of the Perl and CPAN communities

If you’re working on a project that is considering moving to Moose, then you’re especially welcome! We have a set of promo codes you can use when signing up for the hackathon. Please get in touch with us (or some of the existing participants) to get your promo code and a significant rebate!

Commercial tickets available

Would you like to support the hackathon, but don’t have access to a sponsorship budget? Does your company plan on using Moose, and sees the value of having excellent contacts in the open source communities around this technology? For you, we have a limited amount of commercial tickets. Please check out the hackathon participation page for details.

Sponsorship opportunities

The hackathon is already well sponsored, but there is room for more! If you want to support us, please contact the organizers as soon as possible!

Who can come?

In short: Everyone who cares about Moose and object-oriented programming in Perl! We’re trying to make the Perl community better by hacking on the stuff that makes the biggest difference (at least in our eyes ;)). If you agree, you’re very welcome to join us! Check out the event site for details, and get in touch with us on IRC if you’re interested.

And finally, keep in mind Oslo.pm’s 2-point plan:

  1. Do something cool
  2. Tell about it!

See you in Stavanger?

by sjn at May 29, 2012 02:44 PM

October 31, 2011

Anders Nordby

Tailing the wtmp log on 64-bit Linux with Perl?

I like to make things happen event-based, and to that end I have written a script that rsyncs content after upload via FTP. I tail the wtmp log with Perl, and start the sync when the user is, or has been, logged out (short idle timeout). Tailing wtmp on FreeBSD was something I found a working example of on the net long ago:
$typedef = 'A8 A16 A16 L';
$sizeof = length pack($typedef, ());
while ( read(WTMP, $buffer, $sizeof) == $sizeof ) {
    ($line, $user, $host, $time) = unpack($typedef, $buffer);
    # Do whatever you want with these values here
}
So FreeBSD only uses the values line (ut_line), user (ut_name), host (ut_host) and time (ut_time), cf. utmp.h. Linux (x64 - who cares about 32-bit?), on the other hand, stores a whole lot more in the wtmp log, and after a fair bit of googling, trial and error, and peeking in bits/utmp.h, I arrived at:
$typedef = "s x2 i A32 A4 A32 A256 s2 l i2 i4 A20";
$sizeof = length pack($typedef, ());
while ( read(WTMP, $buffer, $sizeof) == $sizeof ) {
    ($type, $pid, $line, $id, $user, $host, $term, $exit,
     $session, $sec, $usec, $addr, $unused) = unpack($typedef, $buffer);
    # Do whatever you want with these values here
}
Which just works, great stuff. I then see users logging in and out in real time, and can take actions based on this.

by Anders (noreply@blogger.com) at October 31, 2011 07:37 PM
