Powered by Planet! Last updated: December 03, 2020 03:30 AM

Planet NUUG

November 15, 2020

Petter Reinholdtsen

The book «Made with Creative Commons» launches in Norwegian

The Norwegian edition of «Made with Creative Commons» is finally finished and published. The following press release was just sent out:

The book «Made with Creative Commons» launches in Norwegian

«Gjort med Creative Commons» is a book about reuse, sharing and the digital commons. It covers building a business model on open values, the changes in mindset and philosophy involved, and the benefits and practices that come with being «open».

The authors Paul Stacey and Sarah Hinchliff Pearson take us into conversations with 24 people, projects and organisations that in various ways generate revenue by sharing their works. As a reader you gain insight into how researchers, authors, artists and film makers alike earn money based on open business models. One of the case studies in the book shows how Blender Animation Studio makes beautiful animated films that it publishes under a free license, built on a platform that is free software.

Beyond practical examples of different business models, the book also touches on the difference between traditional commercial enterprises and those rooted in the global sharing culture.

«If you want to learn more about digital sharing culture and Creative Commons, this is a book that will both inspire and provide fundamental insight,» says the head of Creative Commons Norway, Christer Solheim Gundersen. «In recent years this global movement has seen significant growth, with a total of more than 1.6 billion CC-licensed works available online.» The book is now available in Norwegian thanks to a small group of volunteer enthusiasts led by Petter Reinholdtsen. «On behalf of Creative Commons Norway I want to thank every single contributor. This project is in itself an inspiring example of the sharing culture having a solid foothold here in Norway too,» Gundersen concludes.

The book is of course freely available under a Creative Commons license, and can also be bought as an e-book and in paperback from, among others, Lulu.com and Amazon.

Links and contact information

Now I just hope the book gets many readers, and finds its way under many Christmas trees.

As usual, if you use Bitcoin and want to show your support of my activities, I would appreciate Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b. Note, paying with Bitcoin is not anonymous. :)

November 15, 2020 10:50 PM

October 21, 2020

Peter Hansteen (That Grumpy BSD Guy)

Does Your Email Provider Know What A "Joejob" Is?

Anecdotal evidence seems to indicate that Google and possibly other mail service providers are either quite ignorant of history when it comes to email and spam, or are applying unsavoury tactics to capture market dominance.

The first inklings that Google had reservations about delivering mail coming from my bsdly.net domain came earlier this year, when I was contacted by friends who have left their email service in the hands of Google: it turned out that my replies to their messages did not reach their recipients, even though my logs showed that the Google mail servers had accepted the messages for delivery.

Contacting Google about matters like these means you first need to navigate some web forums. In this particular case (I won't give a direct reference, but a search on the obvious keywords will likely turn up the relevant exchange), the denizens of that web forum appeared to be more interested in demonstrating their BOFHishness than in actually providing input on debugging and resolving an apparent misconfiguration that was making valid mail disappear without a trace after it had entered Google's systems.

The forum is primarily intended as a support channel for people who host their mail at Google (this becomes very clear when you try out some of the web accessible tools to check domains not hosted by Google), so the only practical result was that I finally set up DKIM signing for outgoing mail from the domain, in addition to the SPF records that were already in place. I'm in fact less than fond of either of these SMTP addons, but there were other channels for contacting my friends anyway, and I let the matter rest there for a while.

If you've read earlier instalments in this column, you will know that I've operated bsdly.net with an email service since 2004, along with a handful of other domains set up some years before bsdly.net, sharing the same infrastructure to varying extents. One feature of the bsdly.net and associated domains setup is that in 2006 we started publishing a list of known bad addresses in our domains, which we use as spamtrap addresses, as well as publishing the blacklist that the greytrapping generates.

Over the years the list of spamtrap addresses -- harvested almost exclusively from records in our logs and greylists of apparent bounces of messages sent with forged From: addresses in our domains -- has grown to a total of 29757 spamtraps, a full 7387 of them in the bsdly.net domain alone. At the time of writing, 31162 hosts have attempted to deliver mail to one of those spamtrap addresses during the last 24 hours. The exact numbers will likely have changed by the time you read this -- blacklisted addresses expire 24 hours after last contact, and a few new spamtrap addresses generally turn up each week. With some simple scriptery (a sketch of the idea follows below) we pick them out of logs and greylists as they appear, and sometimes entire days pass without new candidates appearing. For a more general overview of how I run the blacklist, see this post from 2013.
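To illustrate the kind of scriptery involved, here is a minimal sketch, not the actual harvesting script: the log format, file locations and exact patterns are assumptions for illustration only.

#!/bin/sh
# Sketch: pick previously unseen local addresses out of a spamd log
# as spamtrap candidates. Paths and patterns are hypothetical.
LOG=/var/log/spamd
KNOWN=/etc/mail/spamtraps.txt

grep "To:" "$LOG" |
    awk '{print $NF}' |            # assume the address is the last field
    tr -d '<>' |
    grep -i '@bsdly\.net$' |       # keep only addresses in our own domains
    sort -u |
    grep -vxFf "$KNOWN"            # drop addresses we already know about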

In addition to the spamtrap addresses, the bsdly.net domain has some valid addresses including my own, and I've set up a few addresses (actually aliases) for specific purposes, mainly so I can filter them into relevant mailboxes at the receiving end. Despite all our efforts to stop spam, occasionally spam is delivered to those aliases too (see, for example, the ethics of running the traplist page for some amusing examples).

Then this morning a piece of possibly well intended but quite clearly unwanted commercial email turned up, addressed to one of those aliases. For no particularly good reason, I decided to send an answer to the message, telling them that whoever sold them the address list they were using was ripping them off.

That message bounced, and it turns out that the domain was hosted at Google.

Reading that bounce message is quite interesting, because if you read the page they link to, it looks very much like whoever runs Google Mail doesn't know what a joejob is.

The page, which again is intended mainly for Google's own customers, specifies that you should set up SPF and DKIM for domains. But looking at the headers, the message they reject passes both those criteria:

Received-SPF: pass (google.com: domain of peter@bsdly.net designates 2001:16d8:ff00:1a9::2 as permitted sender) client-ip=2001:16d8:ff00:1a9::2;
Authentication-Results: mx.google.com;
dkim=pass (test mode) header.i=@bsdly.net;
spf=pass (google.com: domain of peter@bsdly.net designates 2001:16d8:ff00:1a9::2 as permitted sender) smtp.mailfrom=peter@bsdly.net
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bsdly.net; s=x;
h=Content-Transfer-Encoding:Content-Type:In-Reply-To:MIME-Version:Date:Message-ID:From:References:To:Subject; bh=OonsF8beQz17wcKmu+EJl34N5bW6uUouWw4JVE5FJV8=;
b=hGgolFeqxOOD/UdGXbsrbwf8WuMoe1vCnYJSTo5M9W2k2yy7wtpkMZOmwkEqZR0XQyj6qoCSriC6Hjh0WxWuMWv5BDZPkOEE3Wuag9+KuNGd7RL51BFcltcfyepBVLxY8aeJrjRXLjXS11TIyWenpMbtAf1yiNPKT1weIX3IYSw=;

Then for reasons known only to themselves, or most likely due to the weight they assign to some unknown data source, they reject the message anyway.

We do not know what that data source is. But with more than seven thousand bogus addresses that have generated bounces we've registered, it is likely that the number of invalid bsdly.net From: addresses Google's systems have seen is far larger than the number of valid ones. The actual number of bogus addresses is likely higher still: in the early days the collection process had enough manual steps that we are bound to have missed some. Valid bsdly.net addresses that do not eventually resolve to a mailbox I read are rare, if not entirely non-existent. But the 'bulk mail' classification is bizarre if you even consider checking Received: headers.

The reason Google's systems have most likely seen more bogus bsdly.net From: addresses than valid ones is that, by historical accident, faking sender email addresses in SMTP dialogues is trivial.

Anecdotal evidence indicates that if a domain exists, it will sooner or later be used in the From: field of some spam campaign where the messages originate somewhere else entirely, and it was for that very reason the SPF and DKIM mechanisms were specified. I find both mechanisms slightly painful and inelegant, but used in their proper context they do have their uses.
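If you want to look at such records yourself, here is a quick sketch of how one might inspect a domain's published SPF and DKIM data with dig; the DKIM selector x is taken from the DKIM-Signature header quoted above, and other domains will of course have their own selectors.

# The SPF policy is published as a TXT record on the domain itself
$ dig +short TXT bsdly.net

# DKIM public keys live at <selector>._domainkey.<domain>
$ dig +short TXT x._domainkey.bsdly.net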

For the domains I've administered, we started seeing log entries -- and, in the cases where the addresses were actually deliverable, actual bounce messages -- for messages that definitely did not originate at our site and never went through our infrastructure, long before bsdly.net was created. We didn't even think about recording those addresses until a practical use for them suddenly appeared with the greytrapping feature in OpenBSD 3.3 in 2003.

A little while after upgrading the relevant systems to OpenBSD 3.3, we had a functional greytrapping system going, and at some point before the 2007 blog post I started publishing the generated blacklist. The rest is, well, what got us to where we are today.

From the data we see here, mail sent with faked sender addresses happens continuously, and most likely to all domains sooner or later. Joejobs that actually hit deliverable addresses happen too. Raw data from a campaign in late 2014 that used my main address as the purported sender is preserved here, collected with a mind to writing an article about the incident and comparing it to a similar batch from 2008. That article could still be written at some point; in the meantime the messages, and specifically their headers, are worth looking into if you're a bit like me. (That is, if you get some enjoyment out of such things as discovering the mindbogglingly bizarre and plain wrong mail configurations some people have apparently chosen to live with. And unless your base64 sightreading skills are far better than mine, some of the messages may require a bit of massaging with MIME tools to extract text you can actually read.)

Anyone who runs a mail service and bothers even occasionally to read mail server logs will know that joejobs and spam campaigns with fake and undeliverable return addresses happen all the time. If Google's mail admins are not aware of that fact, well, I'll stop right there and refuse to believe that they can be that incompetent.

The question then becomes: why are they doing this? Are they giving other independent operators the same treatment? Is this part of some kind of intimidation campaign (think "sign up for our service and we'll get your mail delivered; if you don't, delivering to domains we host becomes your problem")? I would think a campaign of intimidation would be a less than useful strategy when there are already antitrust probes underway; these things can change direction as discoveries dictate.

Normally I would put oddities like the ones I saw in this case down to a silly misconfiguration, some combination of incompetence and arrogance and, quite possibly, some out of control automation thrown in. But here we are seeing clearly wrong behavior from a company that prides itself on hiring only the smartest people it can find. That doesn't totally rule out incompetence or plain bad luck, but it makes for a very strange episode. (And lest we forget, here is some data on a previous episode involving a large US corporation, spam and general silliness. (Now with the external link fixed via the wayback machine.))

One other interesting question is whether other operators, big and small, behave in similar ways. If you have seen phenomena like this involving Google or other operators, I would like to hear from you, by email (easily found in this article) or in comments.

Update 2016-05-03: With Google silent on all aspects of this story, it is not possible to pinpoint whether it was the public whining or something else that made the difference, but today a message aimed at one of those troublesome domains made it through to its intended recipient.

Much like with the mysteriously disappeared messages, my logs show a "Message accepted for delivery" response from the Google servers, but unlike the earlier messages, this one actually appeared in the intended inbox. In the meantime, one of the useful bits of feedback I got on the article was that my IPv6 reverse DNS was in fact lacking. Today I fixed that, or at least made a plausible attempt to (you are very welcome to check with the tools available to you). And for possibly unrelated reasons, my test message made it all the way to the intended recipient's mailbox.
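For readers who want to run the same kind of check, a reverse lookup with dig is one readily available tool; the address below is the sending host's IPv6 address from the headers quoted earlier, while the host name in the second command is a placeholder for whatever the first lookup returns.

# Reverse (PTR) lookup for the sending host's IPv6 address
$ dig +short -x 2001:16d8:ff00:1a9::2

# The name returned should resolve (AAAA) back to the same address;
# substitute the real name from the lookup above for this placeholder.
$ dig +short AAAA mail.example.net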

[Opinions offered here are my own and may or may not reflect the views of my employer.]

Update 2016-11-21: Since this piece was originally written, we've seen several episodes of Gmail.com or other Google domains making mail from bsdly.net disappear or bounce, and at some point I got around to setting up proper DMARC records as well.

In a separate development, I also set up the script that generates the blacklist for export to send a copy of its output to my gmail.com addresses. This has led to a few fascinating bounces, all of them archived here in order to preserve a record of incidents and developments.

It should be no surprise that even the SPF-DKIM-DMARC trinity is not actually enough to avoid bounces and disappearances, driving Google to come up with a separate DNS TXT record of their own, the google-site-verification record, which contains a key they will generate for your domain if you manage to find the correct parts of their website.
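Once published, such a record is ordinary DNS data and visible to anyone; a sketch of what checking for one might look like (the key value below is made up):

$ dig +short TXT bsdly.net | grep google-site-verification
# A published record would look something like this (hypothetical key):
# "google-site-verification=0123456789abcdefEXAMPLEONLY"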

Update 2017-02-12 - 2017-02-13: At semi-random intervals since this piece was originally written, Gmail has had a number of short periods of bouncing my hourly spamtrap reports, which are sent to myself at bsdly.net with a Cc: to my gmail account.

These episodes usually last only a couple of hours, possibly ending when a human intervenes to correct the situation. This weekend, however, we have seen more of this nonsense than usual: Gmail started bouncing my hourly reports on Saturday evening CET, with the first bounce for the 2017-02-11 21:10 report, and kept the bounces up through the 2017-02-12 05:10 report.

Then, after a brief respite, the bounces resumed with the 2017-02-12 08:10 report and have persisted, bouncing every hourly message (except the 2017-02-13 03:10, 04:10 and 13:10 messages) with non-delivery, the last bounce received so far being for the 2017-02-13 21:10 report. I archive all bounces as they arrive, mod whatever time I have on my hands, in the googlefail archive.

Update 2020-10-21: Recent events have led me to suspect that Google may be relying very much, thank you, on SPF information, and that their systems may have been seeing timeouts on SPF lookups due to my BIND not switching to TCP answers soon enough. One other incident is almost certain to have been caused by a lack of correct reverse lookup for the sending system's IPv6 address.
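If you suspect a similar TCP fallback problem with your own zone, one simple first check is to compare the same TXT query over UDP and over TCP: if the UDP answer comes back with the truncated (tc) flag set and the TCP query then fails or times out, resolvers that retry over TCP will never see your SPF data. A sketch, using this article's domain as the example:

# Normal query (UDP first); look for the "tc" flag in the reply header
$ dig TXT bsdly.net

# Force TCP; this must also succeed for resolvers that retry over TCP
$ dig +tcp TXT bsdly.net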


I will be giving a PF tutorial at BSDCan 2016, and I welcome your questions now that I'm revising the material for that session. See this blog post for some ideas (note that I'm only giving the PF tutorial this time around).

by Peter N. M. Hansteen (noreply@blogger.com) at October 21, 2020 11:24 AM

October 20, 2020

Petter Reinholdtsen

Buster based Bokmål edition of Debian Administrator's Handbook

I am happy to report that we finally made it! Norwegian Bokmål became the first translation of the new Buster based edition of "The Debian Administrator's Handbook" to be published on paper. The print proof reading copy arrived some days ago, and it looked good, so the book is now approved for general distribution. This updated paperback edition is available from lulu.com. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as well.

I am very happy to wrap up this Creative Commons licensed project, which concludes several months of work by several volunteers. The number of Linux related books published in Norwegian is small, and I really hope this one will gain many readers, as it is packed with deep knowledge of Linux and the Debian ecosystem. The book will be available from various Internet book stores like Amazon and Barnes & Noble soon, but I recommend buying "Håndbok for Debian-administratoren" directly from the source at Lulu.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

October 20, 2020 04:35 PM

October 01, 2020

Ole Aamot GNOME Development Blog

GNOME Internet Radio Locator 3.4.0 with C-SPAN for Fedora Core 32

GNOME Internet Radio Locator 3.4.0 features updated language translations and a new, improved map marker palette. Besides C-SPAN coverage of the United States Supreme Court, Congress and Senate, it now also includes streaming radio from Washington, United States of America (WAMU/NPR); London, United Kingdom (BBC World Service); Berlin, Germany (Radio Eins); Norway (NRK); and Paris, France (France Inter/Info/Culture), as well as 119 other radio stations from around the world, with live audio streaming implemented through GStreamer. The project lives on www.gnomeradio.org, and Fedora 32 RPM packages for version 3.4.0 of GNOME Internet Radio Locator are now also available:

gnome-internet-radio-locator.spec

gnome-internet-radio-locator-3.4.0-1.fc32.src.rpm

gnome-internet-radio-locator-3.4.0-1.fc32.x86_64.rpm

To install GNOME Internet Radio Locator 3.4.0 on Fedora Core 32 in Terminal:

sudo dnf install http://www.gnomeradio.org/~ole/fedora/RPMS/x86_64/gnome-internet-radio-locator-3.4.0-1.fc32.x86_64.rpm

by oleaamot at October 01, 2020 12:00 PM

September 27, 2020

Nicolai Langfeldt

New Linux on old hardware

Since our family relies on the servers in our basement for email, music, movies, and so on, it has been bothering me that we have no spare hardware in case something goes wrong.

I've been given two old Dell machines; I want to use one for stuff and keep one as a spare in case something breaks.

Both needed reinstalling. I like Ubuntu and made myself a memory stick with Ubuntu's usb-creator software. The machines never managed to boot off it. The Ubuntu server ISOs are too large to fit on a CD, but I found Ubuntu's netboot ISOs for 18.04, which fit on a CD ten times over. Again the machines didn't boot from the memory stick or the CD, giving various error messages or just passing the media by and booting off the old OS on the hard drive.

After a while I recalled "unetbootin" - an old, but still updated, tool for making bootable memory sticks from ISO images. It currently lives here: https://unetbootin.github.io/

I had to repartition my memory stick (cfdisk /dev/sdc in my case), make an MS-DOS filesystem on it (mkfs.msdos /dev/sdc1), and mount it.
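Spelled out, the preparation looks something like the following sketch; it assumes the stick really shows up as /dev/sdc, as it did for me, so double-check with lsblk or dmesg before running anything destructive:

# Be certain /dev/sdc is the memory stick before touching it!
sudo cfdisk /dev/sdc        # interactively create one primary FAT32 partition
sudo mkfs.msdos /dev/sdc1   # make the MS-DOS filesystem
sudo mount /dev/sdc1 /mnt   # mount it so unetbootin can write to it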

Then unetbootin could make it bootable, and indeed my quite old hardware was able to see the memory stick as a hard drive and boot from it with no further issues.

Win of the day.


by nicolai (noreply@blogger.com) at September 27, 2020 11:14 AM

September 22, 2020

Dag-Erling Smørgrav

wtf, zsh

wtf, zsh

% uname -sr
FreeBSD 12.1-RELEASE-p10
% for sh in sh csh bash zsh ; do printf "%-8s" $sh ; $sh -c 'echo \\x21' ; done 
sh      \x21
csh     \x21
bash    \x21
zsh     !
% cowsay wtf, zsh       
 __________ 
< wtf, zsh >
 ---------- 
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

I mean. Bruh. I know it’s intentional & documented & can be turned off, but every other shell defaults to POSIX semantics…
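For the record, a sketch of one way to turn it off: zsh's echo builtin stops interpreting backslash escapes when the bsdecho option is set (and, as the symlink trick below shows, sh emulation has the same effect).

% zsh -c 'setopt bsdecho; echo \\x21'
\x21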

BTW:

% ln -s =zsh /tmp/sh
% /tmp/sh -c 'echo \x21'
\x21

by Dag-Erling Smørgrav at September 22, 2020 01:11 PM

September 18, 2020

NUUG events video archive

Introduksjon til bygging av Debianpakker (Introduction to building Debian packages)

September 18, 2020 05:15 AM

September 06, 2020

Peter Hansteen (That Grumpy BSD Guy)

The 'sextortion' Scams: The Numbers Show That What We Have Is A Failure Of Education

Subject: Your account was under attack! Change your credentials!
From: Melissa <chenbin@jw-hw.com>
To: adnan@bsdly.net

Hello!

I am a hacker who has access to your operating system.

I also have full access to your account.

I've been watching you for a few months now.

The fact is that you were infected with malware through an adult site that you visited.


Did you receive a message phrased more or less like that, which then went on to say that they have a video of you performing an embarrassing activity while visiting an "adult" site, which they will send to all your contacts unless you buy Bitcoin and send it to a specific ID?

The good news is that the video does not exist. I know this, because neither does our friend Adnan here. Despite that fact, whoever operates the account presenting as Melissa appears to believe that Adnan is indeed a person who can be blackmailed. You're probably safe for now. I will provide more detail later in the article, but first a few dos and don'ts:

The important point is that you are, or were about to be, the victim of what I consider a very obvious scam, and for no good or even nearly valid reason. You should not need to become the next victim.

And this, dear policy makers and tech heads in general, is our problem: a large subset of the general public simply do not know their way around the digital world we created for them to live in. We need to do better.

In that context I find it quite disturbing that people who should know better, such as the Norwegian Center for Information Security, in a recently issued report (also see Digi.no's article; both in Norwegian only, sorry) predict that the sextortion attacks will become "more sophisticated and credible". Then again, at some level they may technically be right, since this kind of activity starts out with a net negative credibility score.

A case in point: some versions of the scam messages I have been able to study went as far as to claim that the perpetrators had not only taken control of the target's device, but had even sent that very email message from it. That never happened, of course, and anybody who had learned to interpret Received: headers could easily have verified that the message was in fact sent from the great elsewhere. Unfortunately the skill of reading email headers is rarely, if ever, taught to ordinary users.

The fact that people do not understand those -- to techies -- obvious facts is a fairly central and burdensome problem, and again we need to do better.

Now let me explain. Things get incrementally more technical from here, so if you came here only for the admonitions or practical advice and have no use for the background, feel free to wander off.

I know the message I quoted at the beginning is a scam because I run my own mail service, and a look at the logs just now shows that since the last log archiving rotation early Saturday morning, more than 3000 attempts have been made at delivering messages like the one for Adnan, aimed at approximately 200 non-existent recipients, before my logs tell me they finally tried to deliver one to my primary contact address, never actually landing in any inboxes.

One of the techniques we use to weed out unwanted incoming mail is to maintain and publish a list of known bad and invalid email addresses in our domains. These known bad addresses have then, in ways unknown (at least not known to us in any detail), made it into the lists of addresses sold to spammers, and we at the receiving end can use the bad addresses as triggers to block traffic from the sending hosts (if you are interested, you can read elsewhere on this blog for details on how we do this; look for tags such as greylisting, greytrapping or antispam).

If it was not clear earlier, those numbers tell us something about the messages at hand. It should be fairly obvious that compromising videos of non-existent users could not, in fact, exist.

Looking back in archived logs from the same system, I see that a variant of this message started appearing in late January 2018. The specifics of that message sequence will be interesting to revisit when the full history of sextortion (I still do not like the term, but my preferred alternative is at risk of being filtered out by robots serving polite society) is written, but let us rather turn to the more recent data, as in data recorded earlier this week.

Mainly because I found the media coverage of the "sextortion" phenomenon generally uninformed and somewhat annoying, I had been mulling over writing an article about it for a while, but I was still looking for a productive angle when, on Wednesday evening, I noticed a slight swelling in the number of greytrapped hosts. A glance at my spamd log seemed to indicate that at least one of the delivery attempts had a line like

       I am a hacker who has access to your operating system.

Which was actually just what I had been pondering writing about.  

So I set about doing a little research. I grepped (searched) my yet-unrotated spamd logs for the word hacker, which yielded lots of lines of the type

Feb 22 04:04:35 skapet spamd[8716]: 89.22.104.47: Body: I am a hacker who has access to your operating system.
Feb 22 04:17:04 skapet spamd[8716]: 5.79.23.92: Body: I am a hacker who has access to your operating system.
Feb 22 04:34:03 skapet spamd[8716]: 153.120.146.199: Body: I am a hacker who has access to your operating system.
Feb 22 04:40:30 skapet spamd[8716]: 45.181.93.45: Body: I am a hacker who has access to your operating system.
Feb 22 04:55:04 skapet spamd[8716]: 93.186.247.18: Body: I am a hacker who has access to your operating system.
Feb 22 05:09:39 skapet spamd[8716]: 123.51.190.154: Body: I am a hacker who has access to your operating system.
Feb 22 05:13:22 skapet spamd[8716]: 212.52.131.4: Body: I am a hacker who has access to your operating system.
Feb 22 05:38:02 skapet spamd[8716]: 5.79.23.92: Body: I am a hacker who has access to your operating system.
Feb 22 05:44:39 skapet spamd[8716]: 123.51.190.154: Body: I am a hacker who has access to your operating system.
Feb 22 06:00:30 skapet spamd[8716]: 45.181.93.45: Body: I am a hacker who has access to your operating system.

(the full result has been preserved here). Extracting the source addresses gave a list of 198 IP addresses (preserved here).

Extracting the To: addresses from the fuller listing yielded 192 unique email addresses (preserved here). Looking at the extracted target email addresses yielded some interesting insights:

1) The target email addresses were not exclusively in the domains my system actually serves, and

2) Some ways down the list of target email addresses, my own primary address turns up.

Of course 2) made me look a little closer, and it turned out that only one IP address in the extract had tried delivery to my email address.

A further grep on that IP address turned up this result.

There are really no surprises to be had here, at least for a large subset of my supposed readers. The sender had first tried to deliver one of the sextortion video messages to one of the by now more than a quarter million spamtraps, and its IP address was still blacklisted by the time it finally tried delivery to a potentially deliverable address.

Doing a few spot checks on the sender IP addresses in recent and less recent logs, it looks like only two things could be even mildly exciting about those messages. One is the degree to which the content was intended to be embarrassing to the recipient. The other is a possible indicator of the campaign's success: looking back through the logs for the approximately one year of known activity, it even looks like the campaign became multilingual, while retaining the word "hacker" in most if (possibly) not all language versions.

Other than that, it is almost depressing how normal the sextortion campaign is: it uses the same spam sending infrastructure and the same low quality target address lists (the ones containing some subset of my spamtrap addresses) as the regular, and likely not too successful, spammers of every stripe. Nothing else stands out.

And as returning readers will notice, the logs indicate that the spambots are naive enough in their SMTP code that they frequently mistake spamd's delaying tactics for a slow, but functional open SMTP relay.

Now to recap the main points:
Whatever evolves next out of these rather hamfisted attempts at blackmail is unlikely to ever achieve any level of sophistication worthy of the name.

We would all be much better served by focusing on real threats such as, but not limited to, credential harvesting via deceptive content delivered over advertising networks, which themselves are a major headache security- and privacy-wise, or even harvesting via phishing email.

Both of the latter have been known to lead to successful compromise with data exfiltration and identity theft as possible-to-probable results.

To a large extent the damage could have been significantly limited had the general public been taught sensible security practices, such as using multi-factor authentication, or at least actually good passwords combined with securely coded password management applications, and insisting that services encourage such practices.

Yes, I know you have been dying to ask: what is the thing about Adnan? According to my activity log, the address adnan@bsdly.net was added as a spamtrap on July 8th, 2017, after somebot had tried to log on as the user adnan, a user name not seen before at bsdly.net,

Jul  8 09:40:34 skapet sshd[34794]: Failed password for invalid user adnan from 118.217.181.8 port 41091 ssh2

apparently from a network in South Korea.

As always, there is more log material available to competent practitioners and researchers with a valid research agenda. Please contact me if you are such a person who could use the collected data productively.


Update 2020-02-29: For completeness, and because I felt that an unsophisticated attack like the present one deserves a thorough if unsophisticated analysis, I decided to take a look at the log data for the entire 7 day period, post-rotation.

So here comes some armchair analysis, using only the tools you will find in the base system of your OpenBSD machine or any other sensibly stocked unix-like operating system. We start by finding the total number of delivery attempts logged where we have the body text 'am a hacker' (this would show up only after a sender has been blacklisted, so the gross number of actual delivery attempts will likely be a tad higher), with the command

zgrep "am a hacker" /var/log/spamd.0.gz | awk '{print $6}' | wc -l

which tells us the number is 3372.

Next up we use a variation of the same command to extract the source IP addresses of the log entries that contain the string 'am a hacker', sort the result while also removing duplicates and store the end result in an environment variable called lastweek:

 export lastweek=`zgrep "am a hacker" /var/log/spamd.0.gz | awk '{print $6}' | tr -d ':' | sort -u `

With our list of IP addresses tucked away in the environment variable, we go on: for each IP address in our lastweek set, extract all log entries and store the result (still in crude sort order by IP address) in the file 2020-02-29_i_am_hacker.raw.txt:

 for foo in $lastweek ; do zgrep $foo /var/log/spamd.0.gz | tee -a 2020-02-29_i_am_hacker.raw.txt ; done

For reference I kept the list of unique IP addresses (now totalling 231) around too.

Next, we are interested in extracting the target email addresses, so the command

grep "To:" 2020-02-29_i_am_hacker.raw.txt | awk '{print substr($0,index($0,$8))}' | sort -u

finds the lines in our original extract containing "To:", and gives us the list of target addresses the sources in our data set tried to deliver mail to.

The result is preserved as 2020-02-29_i_am_hacker.raw_targets.txt, a total of 236 addresses, mostly but not all in domains we actually host here. One surprise was that among the target addresses, one actually invalid address turned up that was not yet a spamtrap at the time. See the end of the activity log for details (it also turned out to be the last SMTP entry in that log for 2020-02-29).
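A natural follow-up, not part of the original analysis, would be to count targets per domain; a sketch along the same lines:

grep "To:" 2020-02-29_i_am_hacker.raw.txt | awk '{print substr($0,index($0,$8))}' | sort -u | awk -F@ '{print $2}' | sort | uniq -c | sort -rn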

This little round of armchair analysis of the static data set confirms the conclusions from the original article: apart from the possibly titillating aspects of the "adult" web site mentions and the attempt at playing on the target's potential shame over specific actions, as spam campaigns go, this one is ordinary to the point of being a bit boring.

There may well be other actors preying on higher-value targets through their online clumsiness and known peculiarities of taste in an actually targeted fashion, but this is not it.

A final note on tools: in this article, as in all previous entries, I have exclusively used the tools you will find in the OpenBSD (or other sensibly put together unix-like operating system) base system, or at a stretch as an easily available package.

For the simpler, preliminary investigations and poking around like we have done here, the basic tools in the base system are fine. But if you will be performing log analysis at scale or with any regularity for purposes that influence your career path, I would encourage you to look into setting up a proper, purpose-built log analysis system.

Several good options, open source and otherwise, are available. I will not recommend or endorse any specific one, but when you find one that fits your needs and working style you will find that after the initial setup and learning period it will save you significant time.

As per my practice, only material directly relevant to the article itself has been published via the links. If you are a professional practitioner or researcher who can state a valid reason to need access to unpublished material, please let me know and we will discuss your project.

Update 2020-03-02: I knew I had some early samples of messages that did make it to an inbox near me squirreled away somewhere, and after a bit of rummaging I found them, stored here (note the directory name, it seemed so obvious and transparent even back then). It appears that the oldest intact messages I have are from December 2018. I am sure earlier examples can be found if we look a little harder.

Update 2020-03-17: A fresh example turned up this morning, addressed to (of all things) the postmaster account of one of our associated .no domains, written in Norwegian (and apparently generated with Microsoft Office software). The preserved message can be downloaded here.

Update 2020-05-10: While rummaging about (aka 'researching') for something else I noticed that spamd logs were showing delivery attempts for messages with the subject "High level of danger. Your account was under attack."  So out of idle curiosity on an early Sunday afternoon, I did the following:

$ export muggles=`grep " High level of danger." /var/log/spamd | awk '{print $6}' | tr -d ':' | sort -u`
$ for foo in $muggles; do grep $foo /var/log/spamd >>20200510-muggles ; done


and the result is preserved for your entertainment and/or enlightenment here. Not much to see, really, other than that they sent the message in two language varieties, and to a small subset of our imaginary friends.

Update 2020-08-13: Here is another snapshot of activity from August 12 and 13: this file preserves the activity of 19 different hosts, and since they targeted our imaginary friends first, it is unlikely they reached any inboxes here. Some of these campaigns may have managed to reach users elsewhere, though.

Update 2020-09-06: Occasionally these messages manage to hit a mailbox here. Apparently enough Norwegians fall for these scams that Norwegian language versions (not terribly well worded) get aimed at users here. This example, aimed at what has only ever been an email alias, made it here, slipping through by a stroke of luck during a period when that IP address was briefly not in the spamd-greytrap list here, as can be seen from this log excerpt. It is also worth noting that an identically phrased message was sent from another IP address to mailer-daemon@ for one of the domains we run here.

by Peter N. M. Hansteen (noreply@blogger.com) at September 06, 2020 11:28 AM

August 21, 2020

Dag-Erling Smørgrav

Netlink, auditing, and counting bytes

I’ve been messing around with Linux auditing lately, because of reasons, and ended up having to replicate most of libaudit, because of other reasons, and in the process I found bugs in both the kernel and userspace parts of the Linux audit subsystem.

Let us start with what Netlink is, for readers who aren’t very familiar with Linux: it is a mechanism for communicating directly with kernel subsystems using the BSD socket API, rather than by opening device nodes or files in a synthetic filesystem such as /proc. It has pros and cons, but mostly pros, especially as a replacement for ioctl(2), since Netlink sockets are buffered, can be poll(2)ed, and can more easily accommodate variable-length messages and partial reads.

Note: all links to source code in this post point to the versions used in Ubuntu 18.04 as of 2020-08-21: kernel 5.4, userspace 2.8.2.

Netlink messages start with a 16-byte header which looks like this: (source, man page)

struct nlmsghdr {
	__u32		nlmsg_len;	/* Length of message including header */
	__u16		nlmsg_type;	/* Message content */
	__u16		nlmsg_flags;	/* Additional flags */
	__u32		nlmsg_seq;	/* Sequence number */
	__u32		nlmsg_pid;	/* Sending process port ID */
};

The same header also provides a few macros to help populate and interpret Netlink messages: (source, man page)

#define NLMSG_ALIGNTO	4U
#define NLMSG_ALIGN(len) ( ((len)+NLMSG_ALIGNTO-1) & ~(NLMSG_ALIGNTO-1) )
#define NLMSG_HDRLEN	 ((int) NLMSG_ALIGN(sizeof(struct nlmsghdr)))
#define NLMSG_LENGTH(len) ((len) + NLMSG_HDRLEN)
#define NLMSG_SPACE(len) NLMSG_ALIGN(NLMSG_LENGTH(len))
#define NLMSG_DATA(nlh)  ((void*)(((char*)nlh) + NLMSG_LENGTH(0)))
#define NLMSG_NEXT(nlh,len)	 ((len) -= NLMSG_ALIGN((nlh)->nlmsg_len), \
				  (struct nlmsghdr*)(((char*)(nlh)) + NLMSG_ALIGN((nlh)->nlmsg_len)))
#define NLMSG_OK(nlh,len) ((len) >= (int)sizeof(struct nlmsghdr) && \
			   (nlh)->nlmsg_len >= sizeof(struct nlmsghdr) && \
			   (nlh)->nlmsg_len <= (len))
#define NLMSG_PAYLOAD(nlh,len) ((nlh)->nlmsg_len - NLMSG_SPACE((len)))

Going by these definitions and the documentation, it is clear that the length field of the message header reflects the total length of the message, header included. What is somewhat less clear is that Netlink messages are supposed to be padded out to a multiple of four bytes before transmission or storage.

The Linux audit subsystem not only breaks these rules, but does not even agree with itself on precisely how to break them.

The userspace tools (auditctl(8), auditd(8)…) all use libaudit to communicate with the kernel audit subsystem. When passing a message of length size to the kernel, libaudit copies the payload into a large pre-zeroed buffer, sets the type, flags, and sequence number fields to the appropriate values, sets the pid field to zero (which is probably a bad idea but, strictly speaking, permitted), and finally sets the length field to NLMSG_SPACE(size), which evaluates to sizeof(struct nlmsghdr) + size rounded up to a multiple of four. It then writes that exact number of bytes to the socket.

Bug #1: The length field should not be rounded up; the purpose of the NLMSG_SPACE() and NLMSG_NEXT() macros is to ensure proper alignment of subsequent message headers when multiple messages are stored or transmitted consecutively. The length field should be computed using NLMSG_LENGTH(), which simply adds the length of the header to its single argument.

Note: to my understanding, Netlink supports sending multiple messages in a single send / receive provided that they are correctly aligned, that they all have the NLM_F_MULTI flag set, and that the last message in the sequence is a zero-length message of type NLMSG_DONE. The audit subsystem does not use this feature.

Moving on: NETLINK_AUDIT messages essentially fall into one of four categories:

  1. Requests from userspace; for instance, an AUDIT_GET message which requests the current status of the audit subsystem, an AUDIT_SET message which changes parameters, or an AUDIT_LIST_RULES message which requests a list of currently active auditing rules.
  2. Responses from the kernel; these usually have the same type as the request. For instance, the kernel will respond to an AUDIT_GET request with a message of the same type containing a struct audit_status, and to an AUDIT_LIST_RULES request with a sequence of messages of the same type each containing a single struct audit_rule_data.
  3. Errors and acknowledgements. These use standard Netlink message types: NLMSG_ERROR in response to an invalid request (or a valid request with the NLM_F_ACK flag set), or NLMSG_DONE at the end of a multi-part response.
  4. Audit data. Every event that matches an auditing rule will trigger a series of messages with varying types: usually one which describes the system call that triggered the event, one each for every file or directory affected by the call, one or more describing the process, etc. Each message consists of a header of the form audit(timestamp:serial): (with a trailing space) which uniquely identifies the event, followed by a space-separated list of key-value pairs. The final message has the type AUDIT_EOE and has the same header, trailing space included, but no data.

The kernel pads responses, errors and acknowledgements, but does not include that padding in the length reported in the message header. So far, so good. However…

Bug #2: Audit data messages are sent from the kernel without padding.

This is not critical, but it does mean that an implementation that batches up incoming messages and stores them consecutively must take extra care to keep them properly aligned.

Bug #3: The length field on audit data messages does not include the length of the header.

This is jaw-dropping. It is so fundamentally wrong. It means that anyone who wants to talk to the audit subsystem using their own code instead of libaudit will have to add a workaround to the Netlink layer of their stack to either fix or ignore the error, and apply that workaround only for certain message types.

How has this gone unnoticed? Well, libaudit doesn’t do much input validation. It relies on the NLMSG_OK() macro, which checks only three things:

  1. That the length of the buffer (as returned by recvfrom(2), for instance) is no less than the length of a Netlink message header.
  2. That the length field in the message header is no less than the length of a Netlink message header.
  3. That the length field in the message header is less than or equal to the length of the buffer.

Since every audit data message, even the empty AUDIT_EOE message, begins with a timestamp and serial number, the length of the payload is never less than 25-30 bytes, and NLMSG_OK() is always satisfied. And since the audit subsystem never sends multiple messages in a single send / receive, it does not matter that NLMSG_NEXT() will be off by 16 bytes.

Consumers of libaudit don’t notice either because they never look at the header; libaudit wraps the message in its own struct audit_reply with its own length and type fields and pointers of the appropriate types for messages that contain binary data (this is a bad idea for entirely different reasons which we won’t go into here). The only case in which the caller needs to know the length of the message is for audit events, when the length field just happens to be the length of the payload, just like the caller expects.

The odds of these bugs getting fixed are approximately zero, because existing applications will break in interesting ways if the kernel starts setting the length field correctly.

Turing wept.

THIS IS WHY WE CAN’T HAVE NICE THINGS

by Dag-Erling Smørgrav at August 21, 2020 04:33 PM

July 29, 2020

Ole Aamot GNOME Development Blog

Record Live Audio as Ogg Vorbis in GNOME Gingerblue 0.2.0

Today I released GNOME Gingerblue version 0.2.0 with the basic new features:

I began work on GNOME Gingerblue on July 4th, 2018, two years ago, and I am going to spend the next four years completing it for GNOME 4.

GNOME Gingerblue will be a Free Software program for musicians who want to compose, record and share original music on the Internet from the GNOME Desktop.

The project isn’t yet ready for distribution with GNOME 3, and the GUI and features such as meta tagging and Internet uploads remain to be implemented.

The GNOME release team complained about the early release cycle in July and called the project empty, but I estimate it will take at least 4 years to complete 4.0.0, in reasonable time for GNOME 4, which I expect to be released between 2020 and 2026.

The Internet community can’t have Free Music without Free Recording Software for GNOME, but GNOME 4 isn’t built in one day.

I am trying to get gtk_record_button_new() into GTK+ 4.0.

I hope to work more on the first major release of GNOME Gingerblue during Christmas 2020 and perhaps get meta tags working as a new feature in 1.0.0.

Meanwhile you can visit the GNOME Gingerblue project domain www.gingerblue.org with the GNOME wiki page, test the initial GNOME Gingerblue 0.2.0 release that writes and records song files from the microphone into $HOME/Music/ with a wizard GUI and XML parsing from August 2018, or spend money on physical goods such as the Norsk Kombucha GingerBlue soda or the Ngs Ginger Blue 15.6″ laptop bag.

by oleaamot at July 29, 2020 06:00 PM

May 19, 2020

NUUG news

NUUG is building a book scanner - the work is underway

There are millions of books whose copyright term has expired. Some of them are Norwegian books, and a number of them are not available digitally. To try to do something about the latter, NUUG has decided to have a book scanner built. The design is based on a simple plastic variant (build instructions), but it will be made in aluminium for a longer service life.

The job of building the scanner has been given to our friends at Oslo Sveisemek, who are well underway with the work. Here is a sketch of the construction:

Construction sketch

The base frame has been assembled, but a good deal of work remains:

Assembly of the base frame

The idea is that members and others will be able to borrow or rent the book scanner as needed, and those of us who are interested can get started digitising books with OCR and perseverance. Contact aktive (at) nuug.no if this is something for you, or drop by #nuug.

(Photographer: Jonny Birkelund)

May 19, 2020 06:00 PM

January 07, 2020

NUUG news

Noark service interface seminar Monday 27 January 2020, 08:30-11:00

On Monday 27 January 2020, 08:30-11:00, OsloMet and NUUG are organising a breakfast seminar on the Noark 5 service interface (Tjenestegrensesnitt). We find that there are quite a few misunderstandings around the service interface, and with this seminar we want to clear them up and put the focus on the importance of standardisation.

Archives must take their place in a data-driven world, and standardisation and metadata are more important now than ever. Do you want to know more about how standardised records management can help you avoid vendor lock-in? Do you want to avoid the creation of new digital silos? Do you want to reduce archiving costs in the long term? Join us and find out more about what a standardised, future-oriented records management API can do for you.

Participation is free (breakfast is on the house), but seats are limited. The seminar will be streamed online, and a recording will be published afterwards.

More information and registration can be found on NUUG's event page.

January 07, 2020 12:30 PM

December 15, 2019

NUUG Foundation

Travel grants - 2020

NUUG Foundation announces travel grants for 2020. Applications may be submitted at any time.

December 15, 2019 09:46 AM

December 08, 2019

Nicolai Langfeldt

Bluray with menus on Linux - on Ubuntu

For the longest time it was impossible to play BluRay disks on Linux due to the lack of players that could do it. VLC has long been the most capable video player on Linux, and some time ago it managed this too.

I run Ubuntu at home. I can easily install VLC, but some parts were missing to get BluRay playback working (a consolidated sketch of the whole procedure follows below):
  1. Be root: sudo -i
  2. libaacs decodes BluRays: apt-get install libaacs0
  3. BluRays, or at least VLC, need Java 8: apt-get install openjdk-8-jre
  4. Ubuntu and VLC do not agree on the right directory name: cd /usr/lib/jvm/
  5. Link the right one: ln -s java-1.8.0-openjdk-amd64 java-8-openjdk
  6. This library implements BD-J menus: apt-get install libbluray-bdj
Now insert a BluRay disk and play it: vlc bluray://

It should start up with menus. Use arrow keys to navigate and Enter to choose.
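Collected into a single copy-paste friendly block, assuming the package and directory names above still apply to your Ubuntu release:

#!/bin/sh
# Consolidated sketch of the steps above; run the install steps as root.
apt-get install libaacs0 openjdk-8-jre libbluray-bdj
(cd /usr/lib/jvm/ && ln -s java-1.8.0-openjdk-amd64 java-8-openjdk)
# Then, as a normal user, insert a disk and start playback:
vlc bluray://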

by nicolai (noreply@blogger.com) at December 08, 2019 09:51 PM

December 02, 2018

NUUG Foundation

Travel grants - 2019

NUUG Foundation announces travel grants for 2019. Applications may be submitted at any time.

December 02, 2018 04:10 PM

October 23, 2017

Espen Braastad

ZFS NAS using CentOS 7 from tmpfs

Following up on the CentOS 7 root filesystem on tmpfs post, here comes a guide on how to run a ZFS enabled CentOS 7 NAS server (with the operating system) from tmpfs.

Hardware

Preparing the build environment

The disk image is built on macOS using Packer and VirtualBox. VirtualBox is installed using the appropriate platform package downloaded from their website, and Packer is installed using brew:

$ brew install packer

Building the disk image

Three files are needed in order to build the disk image: a Packer template file, an Anaconda kickstart file, and a shell script used to configure the disk image after installation. The following files can be used as examples:

Create some directories:

$ mkdir -p ~/work/centos-7-zfs/
$ mkdir -p ~/work/centos-7-zfs/http/
$ mkdir -p ~/work/centos-7-zfs/scripts/

Copy the files to these directories:

$ cp template.json ~/work/centos-7-zfs/
$ cp ks.cfg ~/work/centos-7-zfs/http/
$ cp provision.sh ~/work/centos-7-zfs/scripts/

Modify each of the files to fit your environment.

Start the build process using Packer:

$ cd ~/work/centos-7-zfs/
$ packer build template.json

This will download the CentOS 7 ISO file, start an HTTP server to serve the kickstart file, and start a virtual machine using VirtualBox:

Packer installer screenshot

The virtual machine will boot into Anaconda and run through the installation process as specified in the kickstart file:

Anaconda installer screenshot

When the installation process is complete, the disk image will be available in the output-virtualbox-iso folder with the vmdk extension.

Packer done screenshot

The disk image is now ready to be put in initramfs.

Putting the disk image in initramfs

This section is quite similar to the previous blog post CentOS 7 root filesystem on tmpfs but with minor differences. For simplicity reasons it is executed on a host running CentOS 7.

Create the build directories:

$ mkdir /work
$ mkdir /work/newroot
$ mkdir /work/result

Export the files from the disk image to one of the directories we created earlier:

$ export LIBGUESTFS_BACKEND=direct
$ guestfish --ro -a packer-virtualbox-iso-1508790384-disk001.vmdk -i copy-out / /work/newroot/

Modify /etc/fstab:

$ cat > /work/newroot/etc/fstab << EOF
tmpfs       /         tmpfs    defaults,noatime 0 0
none        /dev      devtmpfs defaults         0 0
devpts      /dev/pts  devpts   gid=5,mode=620   0 0
tmpfs       /dev/shm  tmpfs    defaults         0 0
proc        /proc     proc     defaults         0 0
sysfs       /sys      sysfs    defaults         0 0
EOF

Disable selinux:

echo "SELINUX=disabled" > /work/newroot/etc/selinux/config

Disable clearing the screen on login failure to make it possible to read any error messages:

mkdir /work/newroot/etc/systemd/system/getty@.service.d
cat > /work/newroot/etc/systemd/system/getty@.service.d/noclear.conf << EOF
[Service]
TTYVTDisallocate=no
EOF

Now jump to the Initramfs and Result sections in the CentOS 7 root filesystem on tmpfs and follow those steps until the end when the result is a vmlinuz and initramfs file.

ZFS configuration

The first time the NAS server boots on the disk image, the ZFS storage pool and volumes will have to be configured. Refer to the ZFS documentation for information on how to do this, and use the following commands only as guidelines.

Create the storage pool:

$ sudo zpool create data mirror sda sdb mirror sdc sdd

Create the volumes:

$ sudo zfs create data/documents
$ sudo zfs create data/games
$ sudo zfs create data/movies
$ sudo zfs create data/music
$ sudo zfs create data/pictures
$ sudo zfs create data/upload

Share some volumes using NFS:

zfs set sharenfs=on data/documents
zfs set sharenfs=on data/games
zfs set sharenfs=on data/music
zfs set sharenfs=on data/pictures
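To verify that the volumes are actually being exported, something like the following should work, assuming the NFS services are running:

# Check the share property on one of the volumes
$ zfs get sharenfs data/documents

# List the filesystems currently exported over NFS
$ showmount -e localhost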

Print the storage pool status:

$ sudo zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 20h22m with 0 errors on Sun Oct  1 21:04:14 2017
config:

	NAME        STATE     READ WRITE CKSUM
	data        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0

errors: No known data errors

October 23, 2017 11:20 PM

February 13, 2017

Mimes brønn

A freedom-of-information well full of knowledge

Mimes brønn is a web service that helps you request access to public administration records in line with the Freedom of Information Act (offentleglova) and the Environmental Information Act (miljøinformasjonsloven). The service has a publicly accessible archive of all answers received to access requests, so that public bodies can avoid answering the same requests over and over again. You find the service at

https://www.mimesbronn.no/

According to old Norse mythology, the well of knowledge is guarded by Mime and lies under one of the roots of the world tree Yggdrasil. Drinking the water of Mime's well gave such valuable knowledge and wisdom that the young god Odin was willing to pledge an eye, becoming one-eyed, for permission to drink from it.

The site is maintained by the NUUG association and is particularly well suited for politically interested people, organisations and journalists. The service is based on the British sister service WhatDoTheyKnow.com, which has already provided access that has resulted in documentaries and countless press stories. According to mySociety, some years ago about 20% of access requests to central government went via WhatDoTheyKnow. We in NUUG hope NUUG's service Mimes brønn can be just as useful for the inhabitants of Norway.

This weekend the service was updated with a lot of new functionality. The new version works better on small screens, and now shows delivery status for requests, so that the sender can more easily check that the recipient's email system has confirmed receipt of the request. The service was set up by volunteers in the NUUG association and launched in the summer of 2015. Since then, 121 users have sent more than 280 requests about everything from wedding rentals of the Opera house and negotiations over the use of Norway's top level DNS domain .bv, to the records handling of applications for housing benefit, and the site is a small treasure chest of interesting and useful information. NUUG has engaged lawyers who can assist with appeals over denied access or flawed case handling.

– «NUUG's Mimes brønn was invaluable when we succeeded in ensuring that the .bv DNS top-level domain remains in Norwegian hands,» says Håkon Wium Lie.

The service documents widely varying practices in the handling of access requests, both in response time and in the content of the answers. The vast majority are handled quickly and correctly, but in several cases access has been granted to documents that the responsible agency later wanted to withdraw, and access has been granted where the redaction was done in a way that did not hide the information meant to be redacted.

– «The Freedom of Information Act is a load-bearing beam of our democracy. It does not care who asks for access, or why. The Mimes brønn project is a materialisation of this principle, where anyone can request access and appeal refusals, and where the documentation is made public. This makes Mimes brønn one of the most exciting transparency projects I have seen in recent times,» says the man who got the tax authority's ownership register opened up, Vegard Venli.

Vi i foreningen NUUG håper Mimes brønn kan være et nyttig verktøy for å holde vårt demokrati ved like.

by Mimes Brønn at February 13, 2017 02:07 PM

January 06, 2017

Espen Braastad

CentOS 7 root filesystem on tmpfs

Several years ago I wrote a series of posts on how to run EL6 with its root filesystem on tmpfs. This post is a continuation of that series, and explains step by step how to run CentOS 7 with its root filesystem in memory. It should apply to RHEL, Ubuntu, Debian and other Linux distributions as well. The post is kept somewhat terse to focus on the concept, and several of the steps have room for improvement.

The following is a screen recording from a host running CentOS 7 in tmpfs:

[screen recording]

Build environment

A build host is needed to prepare the image to boot from. The build host should run CentOS 7 x86_64, and have the following packages installed:

yum install libvirt libguestfs-tools guestfish

Make sure the libvirt daemon is running:

systemctl start libvirtd

Create some directories that will be used later; feel free to relocate these somewhere else:

mkdir -p /work/initramfs/bin
mkdir -p /work/newroot
mkdir -p /work/result

Disk image

For simplicity we'll fetch our rootfs from a pre-built disk image, though it is also possible to build a custom disk image using virt-manager. Most people will probably want to create their own disk image from scratch, but that is outside the scope of this post.

Use virt-builder to download a pre-built CentOS 7.3 disk image and set the root password:

virt-builder centos-7.3 -o /work/disk.img --root-password password:changeme

Export the files from the disk image to one of the directories we created earlier:

guestfish --ro -a /work/disk.img -i copy-out / /work/newroot/

Clear fstab since it contains mount entries that no longer apply:

echo > /work/newroot/etc/fstab

SELinux will complain about incorrect disk labels at boot, so let's just disable it right away. Production environments should keep SELinux enabled.

echo "SELINUX=disabled" > /work/newroot/etc/selinux/config

Disable clearing the screen on login failure to make it possible to read any error messages:

mkdir /work/newroot/etc/systemd/system/getty@.service.d
cat > /work/newroot/etc/systemd/system/getty@.service.d/noclear.conf << EOF
[Service]
TTYVTDisallocate=no
EOF

Initramfs

We’ll create our custom initramfs from scratch. The boot procedure will be, simply put:

  1. Fetch kernel and a custom initramfs.
  2. Execute kernel.
  3. Mount the initramfs as the temporary root filesystem (for the kernel).
  4. Execute /init (in the initramfs).
  5. Create a tmpfs mount point.
  6. Extract our CentOS 7 root filesystem to the tmpfs mount point.
  7. Execute switch_root to boot on the CentOS 7 root filesystem.

The initramfs will be based on BusyBox. Download a pre-built binary or compile it from source, and put the binary in the initramfs/bin directory. In this post I'll just download a pre-built binary:

wget -O /work/initramfs/bin/busybox https://www.busybox.net/downloads/binaries/1.26.1-defconfig-multiarch/busybox-x86_64

Make sure that busybox has the execute bit set:

chmod +x /work/initramfs/bin/busybox

Create the file /work/initramfs/init with the following contents:

#!/bin/busybox sh

# Dump to sh if something fails
error() {
	echo "Jumping into the shell..."
	setsid cttyhack sh
}

# Populate /bin with binaries from busybox
/bin/busybox --install /bin

mkdir -p /proc
mount -t proc proc /proc

mkdir -p /sys
mount -t sysfs sysfs /sys

mkdir -p /sys/dev
mkdir -p /var/run
mkdir -p /dev

mkdir -p /dev/pts
mount -t devpts devpts /dev/pts

# Populate /dev
echo /bin/mdev > /proc/sys/kernel/hotplug
mdev -s

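# Mount a 1.5 GB tmpfs that will hold the extracted CentOS root filesystem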
mkdir -p /newroot
mount -t tmpfs -o size=1500m tmpfs /newroot || error

echo "Extracting rootfs... "
xz -d -c -f rootfs.tar.xz | tar -x -f - -C /newroot || error

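# Move the virtual filesystems over to the new root before switching to it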
mount --move /sys /newroot/sys
mount --move /proc /newroot/proc
mount --move /dev /newroot/dev

exec switch_root /newroot /sbin/init || error

Make sure it is executable:

chmod +x /work/initramfs/init

Create the root filesystem archive using tar. The following command also uses xz compression to reduce the final size of the archive (from approximately 1 GB to 270 MB):

cd /work/newroot
tar cJf /work/initramfs/rootfs.tar.xz .

Create initramfs.gz using:

cd /work/initramfs
find . -print0 | cpio --null -ov --format=newc | gzip -9 > /work/result/initramfs.gz

Copy the kernel directly from the root filesystem using:

cp /work/newroot/boot/vmlinuz-*x86_64 /work/result/vmlinuz

Result

The /work/result directory now contains two files with file sizes similar to the following:

ls -lh /work/result/
total 277M
-rw-r--r-- 1 root root 272M Jan  6 23:42 initramfs.gz
-rwxr-xr-x 1 root root 5.2M Jan  6 23:42 vmlinuz

These files can be loaded directly from disk by GRUB, or over HTTP with iPXE using a script similar to:

#!ipxe
kernel http://example.com/vmlinuz
initrd http://example.com/initramfs.gz
boot
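
For the GRUB case, a minimal menu entry could look like the sketch below (assuming vmlinuz and initramfs.gz have been copied to the root of the boot partition; whether you need linux16/initrd16 or linux/initrd depends on your firmware and GRUB build):

menuentry "CentOS 7 in tmpfs" {
	linux16 /vmlinuz
	initrd16 /initramfs.gz
}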

January 06, 2017 08:34 PM

July 15, 2016

Mimes brønn

Who has drunk from Mimes brønn?

Mimes brønn has now been up for about a year, so we thought it could be interesting to share some brief statistics on how the service has been used.

At the beginning of July 2016, Mimes brønn had 71 registered users who had sent 120 access requests, of which 62 (52%) were successful, 19 (16%) partially successful, 14 (12%) rejected, and 10 (8%) received the answer that the body did not hold the information; 12 requests (10%; 6 from 2016, 6 from 2015) were still unanswered. A handful (3) of the requests could not be categorised. So around two thirds of the requests were successful, wholly or in part. That's good!

The time before the body first replies varies a lot, from the same day (some requests sent to Utlendingsnemnda, Statens vegvesen, Økokrim, Mediatilsynet, Datatilsynet, Brønnøysundregistrene) up to 6 months (Ballangen kommune) or longer (Stortinget, Olje- og energidepartementet, Justis- og beredskapsdepartementet, UDI – Utlendingsdirektoratet and SSB have received access requests that are still unanswered). The average here was a couple of weeks (not counting the 12 cases with no answer at all). It follows from § 29 first paragraph of the Freedom of Information Act that requests for access to administrative documents must be answered «without undue delay», which according to the Parliamentary Ombudsman should in most cases be interpreted as «the same day, or at any rate within 1-3 working days». So there is room for improvement.

The right to appeal (offentleglova § 32) was used in 20 of the access requests. In most (15; 75%) of the cases the appeal led to the request succeeding. The average time to get an answer to the appeal was one month (not counting 2 cases, appeals sent to Statens vegvesen and Ruter AS, where no answer has arrived). Appealing is well worth it, and completely free! The Parliamentary Ombudsman has stated that 2-3 weeks is beyond acceptable case-handling time for appeals.

Most requests were sent to Utenriksdepartementet (9), closely followed by Fredrikstad kommune and Brønnøysundregistrene. In all, requests were sent to 60 public authorities, of which 27 received two or more. There are more than 3700 authorities in the Mimes brønn database, so most of them have yet to receive an access request through the service.

Looking at what kind of information people have asked for, we see a wide range of interests: everything from the municipality's parking spaces, travel expense claims exceeding the state's accommodation rates, correspondence about asylum reception centres and negotiations over the .bv top-level domain, to documents about Myanmar.

Public authorities do all sorts of things. Some of it is done badly, some of it well. The more we find out about how the authorities work, the better placed we are to suggest improvements to what works badly… and to applaud what works well. If there is something you want access to, just click on https://www.mimesbronn.no/ and you are on your way 🙂

by Mimes Brønn at July 15, 2016 03:56 PM

June 01, 2016

Kevin Brubeck Unhammer

Machine translation vs the NTNU examiner

Twitter user @IngeborgSteine recently got a fair bit of attention when she tweeted a picture of the Nynorsk version of her economics exam at NTNU:

This was my economics exam in "Nynorsk". #nynorsk #noregsmållag #kvaialledagar https://t.co/RjCKSU2Fyg
Ingeborg Steine (@IngeborgSteine) May 30, 2016

Creative inventions like *kvisleis, and all the dialect forms and archaisms, would have been unlikely to appear in a machine-translated version, so I wondered how much better/worse it would have been if the examiner had simply used Apertium instead. Ingeborg Steine was helpful enough to post the Bokmål version, so let's give it a try 🙂

[image: NTNU-nob-nno.jpeg]

No kvisleis, and free of tær and fyr, but it is not perfect either: some words are missing from the dictionaries and thus get the wrong inflection, teller is interpreted as a noun, ein anna maskin has the wrong inflection of the first word (a rule was missing there), and at is in one place interpreted as an adverb (leading to the curious fragment det verta at anteke tilvarande). On top of that, the web page detects the language as Tatar, so perhaps the Norwegian was a bit heavy? 🙂 But these errors are not particularly hard to fix – the development version of Apertium now gives:

[image: NTNU-nob-nno-svn.jpeg]

There are still a couple of small things that could be fixed, but it is already better than most of the exams I was handed at UiO …

by unhammer at June 01, 2016 09:45 AM

October 18, 2015

Anders Nordby

Fighting spam with SpamAssassin, procmail and greylisting

On my private server we use a number of measures to stop and prevent spam from arriving in the users' inboxes:

- postgrey (greylisting) to delay arrival; with luck, the block lists will be up to date in time to stop unwanted mail, and some spam senders never retry.

- SpamAssassin to block mail by scoring different aspects of each message. Newer versions have URIBL (domain-based, for links in the emails) in addition to the traditional RBL (IP-based) block lists, which works better. I have also created my own URIBL block list which you can use, dbl.fupp.net.

- Procmail. For users on my server, I recommend this procmail rule:

:0
* ^X-Spam-Status: Yes
.crapbox/

It will sort email whose score marks it as spam into the mailbox "crapbox".

- Blocking unwanted and dangerous attachments, particularly for Windows users.
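
As a sketch of how the greylisting piece typically hooks into Postfix (assuming postgrey runs as a policy service on its default port 10023; socket paths and ports vary between distributions), main.cf would contain something like:

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023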

by Anders (noreply@blogger.com) at October 18, 2015 01:09 PM

April 23, 2015

Kevin Brubeck Unhammer

Translation by splitting compounds

In the previous post in this series I went briefly through various methods for generating translation candidates for bilingual dictionaries; in this post I will go a bit deeper into candidate generation by translating the individual parts of compound words. As mentioned, we already have a dictionary between Bokmål and North Sámi, which we want to extend to Bokmål–Lule Sámi and Bokmål–South Sámi. The dictionary was developed for translating typical «ministry language», so it is full of long compound words. And in Sámi we can build compounds in roughly the same way as in Norwegian (in addition to a heap of other ways, but we will cheerfully skip those for now). We should be able to exploit this, so that if we know what «klage» (complaint) is in Lule Sámi, and we know what «frist» (deadline) is, then we have at least one reasonable hypothesis for what «klagefrist» (complaint deadline) might be in Lule Sámi 🙂

Compound splitting is great when you are translating dictionaries. Compound-splitting errors are great when you want a little laugh.
«Ananássasuorma» jali «ananássa riŋŋgu»? Ij le buorre diehtet.

So we can use the few translations we already have between Bokmål and Lule/South Sámi to create more translations, by translating parts of words and then putting the parts back together. We also have a couple of translations lying around between North Sámi and Lule/South Sámi, so we can use the same method there (and exploit the Bokmål–North Sámi dictionary to close the circle back to Bokmål).

Coverage and precision

Unfortunately (in this context) we often have several translations of each word; in the existing Bokmål–Lule Sámi dictionaries we are looking at (largely based on Anders Kintel's dictionary), «klage» can be gujdalvis, gujddim, luodjom or kritihkka, among others, while «frist» can be ájggemierre, giehtadaláduvvat, mierreduvvam or ájggemærráj. If we let every left-hand part combine with every right-hand part, we get 16 possible candidates for this one word! Probably no more than one or two of them are usable (and maybe not even that). On average, this method gives us roughly twice as many candidates as source words. So we should find methods for cutting down on bad candidates.

The complementary challenge is getting good enough coverage. Sometimes we have no translation for the parts of a word, even though we have translations of other words containing those same parts. That sentence calls for an example 🙂 We would like a candidate for the word «øyekatarr» (eye catarrh) in Lule Sámi, i.e. the compound «øye+katarr». We may have a translation for «øye» (eye) in our material, but nothing for «katarr». On the other hand, the material says that «blærekatarr» (bladder catarrh) is gådtjåráhkkovuolssje. So to extend our coverage, we can additionally split the source material into all pairs of compound parts; if we know that these words can be analysed as «blære+katarr» and gådtjåráhkko+vuolssje, it would seem that «blære» is gådtjåráhkko and «katarr» is vuolssje (and Giellatekno fortunately has good morphological analysers that split such words at the right place). This gives a good extension of the material – in fact we get candidates for nearly twice as many of the words we want candidates for, if we extend the source material this way. But it has a big downside too: we get more than twice as many Lule/South Sámi candidates per Bokmål word (on average around four candidates per source word).

Filtering and ranking

We want to narrow the possible candidates down to those most likely to be good. The best test is to check whether the candidate occurs in a corpus, preferably in the same parallel-aligned sentence (such a candidate is usually a good one). Failing that, we can check whether the candidate and the source word have similar frequencies, or whether the candidate has any corpus frequency at all.

The compound-splitting translation suggested tsavtshvierhtie for «virkemiddel» (policy instrument), and the two even appeared in a parallel sentence:
<s xml:lang="sma" id="2060"/>Daesnie FoU akte vihkeles tsavtshvierhtie .
<s xml:lang="nob" id="2060"/>Her er FoU er et viktig virkemiddel .

– so that is probably a good word pair.

Unfortunately we have so little text in Lule/South Sámi that we quickly run out of candidates with any frequency at all. For South Sámi, for instance, we only have candidates with corpus hits for around 10 % of the words we generate candidates for.

Another test, which works for all words, is to check whether the candidate is accepted by our morphological analysers; if it is not (and it has no corpus hits either), it is usually wrong. But this only removes about a quarter of the candidates; with our split-up dictionary (where we also include pairs of word parts) we are still left with around three candidates per source word on average.

(One test I tried and rejected was filtering on similar word length. It seems logical that long words get translated to long words and short ones to short ones, but there are many good exceptions. Besides, it removes far too few bad candidates to seem worthwhile.)

Our parallel corpus material is far too small, but when generating candidates for dictionaries it is not parallel sentences we are trying to predict, but parallel words and dictionary pairs. And then our training material is really our existing dictionaries … So I tried looking at which compound parts were actually used in our earlier translations, which pairs of parts appeared often in earlier translations, and which parts rarely or never did. For example, our split-up Bokmål–Lule Sámi dictionary has these pairs:

Here we see that «løyve» (licence) can be either loahpádus or doajmmaloahpe – should «taxiløyve» (taxi licence) then be táksiloahpádus or táksidoajmmaloahpe? Based on this material we should probably bet on the former: although doajmmaloahpe is listed, only loahpádus actually occurs in compound words.

We can then try to generate candidates for all the Bokmål words in our material, both those we actually want candidates for and those we already have translations for. Next, we go through the generated candidates for the words we already have translations for, and count the pairs of word parts that generated such words. Say we generated the candidates barggo+loahpádus and barggo+dajmmaloahpe for «arbeids+løyve» (work permit); when we then go through the existing translations and find that «arbeidsløyve» was listed in the dictionary with the translation barggoloahpádus, we increase the frequency of the pair «løyve»–loahpádus by one, while «løyve»–dajmmaloahpe stays at zero.

For now I have only filtered out the candidates where the pair for either the first or the second element had zero frequency. According to a bit of manual evaluation by a linguist, it is almost exclusively bad words that get thrown out, so that filter seems to work well. On the other hand, only around 10 % of the candidates are removed if we just discard those with zero frequency, so the next step is to use the frequencies for a full ranking.

If every word could be split into exactly two parts, it might be enough to count pairs of parts and individual parts to estimate probabilities, i.e. f(s,t)/f(s). But some words can be split in several ways; for example, «sommersiidastyre» can be read as «sommer+siidastyre» or «sommersiida+styre» (I have chosen to stick to two-way splits, to avoid too many alternative candidates). If the translation is giessesijddastivrra, with the analyses giesse+sijddastivrra or giessesijdda+stivrra, we have no immediate reason to prefer one over the other (well, we have length in this case, but that does not hold in all such examples, and we may have pairs of analyses that are 2–3 or 3–2). Then we also cannot say which pair of word parts (s,t) to increment when we see «sommersiidastyre»–giessesijddastivrra in the training material. But if we additionally see «styre»–stivrra somewhere else, we suddenly have grounds for a decision. Methods like Expectation Maximization can combine related frequencies in this way to arrive at good estimates, but I have not yet got as far as implementing that.
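
As a hypothetical worked example of that estimate (the numbers here are made up for illustration): if «løyve» occurs as a compound element in five existing dictionary pairs, four of which pair it with loahpádus and none with doajmmaloahpe, then f(«løyve», loahpádus)/f(«løyve») = 4/5 = 0.8, while the doajmmaloahpe pairing scores 0/5 = 0 and is discarded by the zero-frequency filter.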

by unhammer at April 23, 2015 06:11 PM

April 14, 2015

NUUG events video archive

20111108_lisp

April 14, 2015 11:13 AM

January 06, 2015

thefastestwaytobreakamachine

NSA-proof SSH

One of the biggest takeaways from 31C3 and the most recent Snowden-leaked NSA documents is that a lot of SSH stuff is .. broken.

I’m not surprised, but then again I never am when it comes to this paranoia stuff. However, I do run a ton of SSH in production and know a lot of people that do. Are we all fucked? Well, almost, but not really.

Unfortunately, most of what Stribika writes about in the “Secure Secure Shell” doesn’t work for old production versions of SSH. The cliff notes for us real-world people, who will realistically be running SSH 5.9p1 for years, are hidden in the bettercrypto.org repo.

Edit your /etc/ssh/sshd_config:


Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160
KexAlgorithms diffie-hellman-group-exchange-sha256

Basically, the nice and forward-secure aes-*-gcm and chacha20-poly1305 ciphers, the curve25519-sha256 KEX algorithm, and the Encrypt-then-MAC message authentication modes are not available to those of us stuck in the early 2000s. That’s right: provably NSA-proof stuff, not supported. Upgrading at this point makes sense.

Still, we can harden SSH: go into /etc/ssh/moduli and delete all the moduli whose 5th column is < 2048, and disable the old protocol 1, DSA and ECDSA host keys:

cd /etc/ssh
mkdir -p broken
mv moduli ssh_host_dsa_key* ssh_host_ecdsa_key* ssh_host_key* broken
awk '{ if ($5 > 2048){ print } }' broken/moduli > moduli
# create broken links to force SSH not to regenerate broken keys
ln -s ssh_host_ecdsa_key ssh_host_ecdsa_key
ln -s ssh_host_dsa_key ssh_host_dsa_key
ln -s ssh_host_key ssh_host_key
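
To verify that no weak groups slipped through, a quick check that should print nothing:

awk '$5 <= 2048 { print }' /etc/ssh/moduli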

Your clients, which hopefully have more recent versions of SSH, could have the following settings in /etc/ssh/ssh_config or .ssh/config:

Host all-old-servers

    Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-ripemd160
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
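
To see which ciphers, MACs and KEX algorithms your client build actually supports (OpenSSH 6.3 and newer), you can query it directly:

ssh -Q cipher
ssh -Q mac
ssh -Q kex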

Note: sadly, the -ctr ciphers do not provide authenticated encryption, and hmac-ripemd160 isn’t the strongest MAC. But if you disable these, there are plenty of places you won’t be able to connect to. Upgrade your servers to get rid of these weaker algorithms!

Handily, I have made a little script to do all this and more, which you can find in my Gone distribution.

There, done.


Updated Jan 6th to highlight the problems of not upgrading SSH.
Updated Jan 22nd to note CTR mode isn’t any worse.
Go learn about COMSEC if you didn’t get trolled by the title.

by kacper at January 06, 2015 04:33 PM

December 08, 2014

thefastestwaytobreakamachine

sound sound

Intermission..

Recently I have been doing some video editing… though less editing than tweaking my system.
If you want your JACK output to talk to Kdenlive, a most excellent video editing suite,
and to output audio nicely without choppiness and popping (which, I promise you, is not nice),
you’ll want to pipe it through PulseAudio, because the ALSA-to-JACK route doesn’t get along with Phonon, at least not on this convoluted setup.

Remember, to get that setup to work, ALSA pipes to JACK via the pcm.jack { type jack .. } definition, and you remove the ALSA-to-PulseAudio redirection at /usr/share/alsa/alsa.conf.d/50-pulseaudio.conf
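
A minimal sketch of that pcm.jack definition for ~/.asoundrc, assuming the JACK PCM plugin from alsa-plugins is installed (the system:playback_* port names match a default JACK setup and may differ on yours):

pcm.jack {
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}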

So, once that’s in place, it still won’t play even though Pulse has found your JACK, because your clients are defaulting to some ALSA device… this is when you edit /etc/pulse/client.conf and set default-sink = jack_out.

by kacper at December 08, 2014 12:18 AM

February 24, 2013

Bjørn Venn

Chromebook; a real cloud computer – but will it work in the clouds?

Fancy one? It is not yet for sale in Norway, but you can buy it on Amazon. Read here how I bought mine on Amazon (scroll a bit down the page). With Norwegian VAT, delivered to the Rimi shop 100 metres from where I live, it came to 1,850 kroner. It is absolutely worth it :)

by Bjorn Venn at February 24, 2013 07:34 PM

February 22, 2013

Bjørn Venn

Who can get me one of these before Easter?

[image: Chromebook Pixel]

Google's new Chromebook, the Chromebook Pixel. For now only on sale in the US and UK, via Google Play and BestBuy.

The world is unfair :)

by Bjorn Venn at February 22, 2013 12:44 PM

October 31, 2011

Anders Nordby

Tailing the wtmp log on 64-bit Linux with Perl?

I like to make things happen event-driven, and for that purpose I wrote a script to rsync content after it has been uploaded with FTP. I tail the wtmp log with Perl, and start the sync when the user is, or has just been, logged out (short idle timeout). Tailing wtmp on FreeBSD was something I long ago found a working example of on the net:

open(WTMP, '<', '/var/log/wtmp') or die "open: $!";  # the original snippet assumes WTMP has already been opened
$typedef = 'A8 A16 A16 L';
$sizeof = length pack($typedef, ());
while ( read(WTMP, $buffer, $sizeof) == $sizeof ) {
    ($line, $user, $host, $time) = unpack($typedef, $buffer);
    # Do whatever you want with these values here
}

So FreeBSD only uses the values line (ut_line), user (ut_name), host (ut_host) and time (ut_time), cf. utmp.h. Linux (x64 – who cares about 32-bit?), on the other hand, stores quite a bit more in the wtmp log, and after some googling, trial and error, and reading of bits/utmp.h, I arrived at:

$typedef = "s x2 i A32 A4 A32 A256 s2 l i2 i4 A20";
$sizeof = length pack($typedef, ());
while ( read(WTMP, $buffer, $sizeof) == $sizeof ) {
    ($type, $pid, $line, $id, $user, $host, $term, $exit,
     $session, $sec, $usec, $addr, $unused) = unpack($typedef, $buffer);
    # Do whatever you want with these values here
}

And it just works – great. I then see users logging in and out in real time, and can act on it.

by Anders (noreply@blogger.com) at October 31, 2011 07:37 PM

A complete feed is available in any of your favourite syndication formats linked by the buttons below.

[RSS 1.0 Feed] [RSS 2.0 Feed] [Atom Feed] [FOAF Subscriptions] [OPML Subscriptions]

Subscriptions