I guess it is time to shed some light on the various free software and open culture activities and projects I have worked on or been involved in during the last year and a half.
First, let's mention the book releases I managed to publish. The Cory Doctorow book "Hvordan knuse overvåkningskapitalismen" argues that it is not the magic machine learning of the big technology companies that causes surveillance capitalism to thrive, it is the lack of trust busting to enforce existing anti-monopoly laws. I also published a family of dictionaries for machinists, one sorted on the English words, one sorted on the Norwegian and the last sorted on the North Sámi words. A bit on the back burner but not forgotten is the Debian Administrator's Handbook, where a new edition is being worked on. I have not spent as much time as I want to help bring it to completion, but hope I will get more spare time to look at it before the end of the year.
With my Debian hat on I have spent time on several projects, both updating existing packages, helping to bring in new packages and working with upstream projects to try to get them ready to go into Debian. The list is rather long, and I will only mention my own isenkram, openmotor, the VLC bittorrent plugin, xprintidle, the Norwegian letter style for LaTeX, bs1770gain, and recordmydesktop. In addition to these I have sponsored several packages into Debian, like audmes.
Over the last year I have looked at several infrastructure projects for collecting meter data and video surveillance recordings. This includes several ONVIF related tools like onvifviewer and zoneminder, as well as rtl-433, wmbusmeters and rtl-wmbus.
In parallel with this I have looked at fabrication related free software solutions like pycam and LinuxCNC. The latter recently gained improved translation support using po4a and weblate, which was a harder nut to crack than I had anticipated when I started.
Several hours have been spent translating free software to Norwegian Bokmål on the Weblate hosted service. I do not have a complete list, but you will find my contributions in at least gnucash, minetest and po4a.
I also spent quite some time on the Norwegian archiving specification Noark 5, and its companion project Nikita, which implements the API specification for Noark 5.
Recently I have been looking into free software tools to do company accounting here in Norway, which presents an interesting mix of law, rules, regulations, format specifications and API interfaces.
I guess I should also mention the Norwegian community driven government interfacing projects Mimes Brønn and Fiksgatami, which have ended up in a kind of limbo while the future of the projects is being worked out.
These are just a few of the projects I have been involved in and would like to give more visibility. I'll stop here to avoid delaying this post.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
This past week was truly one for the blooper reel. A public cloud service provider let the great unwashed into the address ranges published as safe mailers via their SPF records, with hilarious if rather predictable results. Next up, we find an intensive advertising campaign for spamware aimed at our imaginary friends. And the password guessing aimed at an ever-expanding dictionary of non-existing users continues.
To the rest of the world, bsdly.net is known variously as a honeypot, a source of various kinds of blocklists or a frequent target of domain joejobs that contribute to the ever-expanding list of imaginary friends also known as spamtraps.
To me and a very small set of other people, it's home on the net, providing a set of services we need fairly painlessly on an OpenBSD platform that rarely requires much work besides the odd pkg_add -u -D snap followed by sysupgrade -s (yes, we jump from snapshot to snapshot on this one).
Then the past week served us with three separate events that, while actually harmless to our side, together serve to show that a certain subset of humans would perhaps be better diverted to activities that do not involve computers.
Note: This piece is also available, with more basic formatting but with no trackers, here.
The first event started on Tuesday. While looking for something else entirely in my mail server logs, I noticed an unusually high number of delivery attempts to what definitely looked like spamtraps reaching the actual mail server. The earliest entry was from that morning:
2022-03-29 06:28:31 H=ministeriodesanidad34.yomevacunoseguro.com [20.104.226.220] X=TLS1.2:ADH-AES256-GCM-SHA384:256 CV=no F=<root@ministeriodesanidad34.yomevacunoseguro.com> rejected RCPT <claus-leba@ehtrib.org>: Unknown user
There were several thousand entries of that type (full log here, extracted source IP addresses here). My initial impulse was of course to check the logs to see how they got past spamd in the first place. Oddly, I found no trace of any activity involving spamd and a random sampling of those IP addresses. That would in turn indicate that they had been pass-listed, most likely by being included in the permanent nospamd pass list, which we generate here mainly based on the published SPF records of domains we need to communicate with.
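For illustration, a minimal sketch of how such a pass list can be generated from published SPF records (the domain list and file paths here are invented, and include:/redirect= mechanisms are not followed):

# Collect ip4:/ip6: networks from the SPF records of domains we mail with,
# then load the result into the pf table the nospamd pass rule references.
for dom in example.com example.org; do
    dig +short txt "$dom" | tr ' ' '\n' | sed -n 's/^ip[46]://p'
done | sort -u > /etc/mail/nospamd
pfctl -t nospamd -T replace -f /etc/mail/nospamd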
Again taking a random subset of the extracted IP addresses and running whois on them identified the IP address range owner as a large cloud services provider that among other things also provides hosted or hybrid on-premises plus hosted email service. This in turn means that domains that use those services also include large segments of that provider's IP address ranges in their SPF records. Not quite knowing what to do, I tweeted,
Grumble.

Spam campaigns sending from ranges in our #nospamd tables (and hitting actual SMTP service) due to being in a major #cloud operator's #SPF records, sending to a large chunk of our #spamtraps.

Still developing, blogworthy, Y or N?

— Peter N. M. Hansteen (@pitrh) March 29, 2022
In addition to tweeting and looking for feedback (which was not huge but dominated by Y answers), I notified the relevant abuse@ address by mail, including links to the log data and the IP addresses.
I also tweaked the log reader hinted at in this earlier piece so any attempt at delivering mail from that domain in the future will put the sending IP address safely away both in the spamd blocklist and in the safety of a table that is subject to block drop quick from for six weeks after the activity stops, and exported to downloadable blocklists as described in the article I referenced earlier.
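In pf.conf terms, that arrangement amounts to something like the following sketch (the table name is my invention, not necessarily what is used at bsdly.net):

# A persistent table of long-term offenders, dropped outright:
table <longterm> persist counters
block drop quick from <longterm>

with the log reader doing the equivalent of pfctl -t longterm -T add for each offending address, and a periodic job expiring entries once six weeks have passed since the activity stopped.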
The abuse@ handlers at the company I am not naming explicitly here (yes, there's a unix command you can use to find out who if it matters to you) were quite responsive and said that the activity seemed to be coming from their public cloud section, and yes they were forwarding my data to their internal CERT. As a followup I suggested to them that using our slowly expanding list of spamtraps in their outbound filtering might be a good idea if they intend to offer SMTP-for-hire in the future.
What seems to have happened is that the miscreants here set up using a range of the provider's services, including domain registration and DNS hosting, and, judging from the consistent use of root@ as the sender address, set up some number of Linux virtual machines to do the spamming.
Before the activity stopped later in the week, we identified two more campaigns that fit the pattern. The data can be found here: log entries and IP addresses for the second wave, log entries and IP addresses for the third. Each of the campaigns appears to have stopped shortly after its domains were de-registered. I never saw the contents of the messages, since not a single one appears to have inboxed here.
The episode has a few teachable items: First, that some subset of our list of spamtraps is indeed incorporated in the address lists used by gullible spammers and their customers, and second, that if you run a public cloud service, you need to pay attention to what your customers do and be wary of letting them use IP address ranges that have been announced as being really safe to accept mail from.
I notified the cloud provider that I would be writing an article about the events and asked them for any and all useful input they could provide. No such information has surfaced by the time of writing. If any useful information turns up from them after publication, I will of course update this piece accordingly.
While the public cloud spammers thing was developing, I noticed another campaign that was actively targeting our spamtraps.
The messages were sent from, as far as I could tell, only three IP addresses, with a total of 58 different subject lines, all about spamming tools. It is possible that the campaign did not target our spamtraps exclusively in our domains, but the log archive (1.5M compressed, expanding to 40M raw) serves as testament that our imaginary friends are definitely targeted by some subset of the online marketing community, and that they are pouring resources into doing that, one byte per second.
And once again, I have not seen the actual contents of the messages beyond what turns up in the logs after greytrapping kicks in. Not one of those messages found its way to an actual mailbox here.
As noted in a previous article, SSH password guessing activity went up significantly in the days leading up to the Russian invasion of Ukraine in February.
In addition to the data referenced directly in the article, there are the archived logs and the summaries of the number of attempts per day (March and subsequent month summaries are archived along with other SSH log data, while the data for the current month so far gets updated several times a day). The number of attempts per day has been consistently higher than before the Ukraine war started, with a new higher-intensity episode ongoing as I type.
One interesting feature of the password guessing attempts during the last few days is that they involve a much larger number of new user names than usual. This means that the list of spamtraps here is now growing at the highest rate since the episode involving what was likely one or more phishing campaigns targeting Chinese users during early 2019, as mentioned in my 2019 summed up piece published at the end of that year.
If you have further data on these or similar incidents that you are able to share or if you want to look further into these and similar incidents, please let me know.
If you find any errors in the material I publish or disagree with my sentiments, or if you find this article interesting, useful or annoying, please let me know, either in comments or via email.
by Peter N. M. Hansteen (noreply@blogger.com) at June 10, 2022 02:32 PM
Back in October last year, when I started looking at the LinuxCNC system, I proposed to change the documentation build system to make life easier for translators. The original system consisted of independently written documentation files for each language, with no automated way to track changes done in other translations and no help for the translators to know how much was left to translate. By using the po4a system to generate POT and PO files from the English documentation, this can be improved. A small team of LinuxCNC contributors got together and today our labour finally paid off. Since a few hours ago, it is now possible to translate the LinuxCNC documentation on Weblate, alongside the program itself.
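Roughly how the po4a side fits together: a configuration file maps each English master document to a POT file and per-language PO files, and po4a keeps translated documents regenerated from those. A hypothetical fragment (the paths are invented, not the actual LinuxCNC layout):

# po4a.cfg sketch: POT/PO bookkeeping for one AsciiDoc master document
[po4a_paths] docs/po/docs.pot $lang:docs/po/$lang.po
[type: asciidoc] docs/src/getting-started.adoc $lang:docs/$lang/getting-started.adoc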
The effort to migrate the documentation to use po4a has been both slow and frustrating. I am very happy we finally made it.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Gingerblue 6.0.1 is Free Music Recording Software for GNOME, available under the GNU General Public License version 3 (or later), that now supports immediate audio recording into compressed Ogg Vorbis encoded files stored in the $HOME/Music/ folder. https://download.gnome.org/sources/gingerblue/6.0/gingerblue-6.0.1.tar.xz
Visit https://www.gingerblue.org/ and https://wiki.gnome.org/Apps/Gingerblue for more information about Gingerblue, the GTK+/GNOME wizard program for free music recording under GNOME 42.
by oleaamot at May 28, 2022 06:33 AM
The successor to GNOME Internet Radio Locator for GNOME 42 is available from http://download.gnome.org/sources/gnome-radio-16.0.43.tar.xz and https://wiki.gnome.org/Apps/Radio
New stations in GNOME Radio version 16.0.43 are NRK Folkemusikk (Oslo, Norway), NRK P1+ (Oslo, Norway), NRK P3X (Oslo, Norway), NRK Super (Oslo, Norway), Radio Nordfjord (Nordfjord, Norway), and Radio Ålesund (Ålesund, Norway).
Installation on Debian 11 (GNOME 42) from GNOME Terminal
sudo apt-get install gnome-common gcc git make wget
sudo apt-get install debhelper intltool dpkg-dev-el libgeoclue-2-dev
sudo apt-get install libgstreamer-plugins-bad1.0-dev libgeocode-glib-dev
sudo apt-get install gtk-doc-tools itstool libxml2-utils yelp-tools
sudo apt-get install libchamplain-0.12-dev libchamplain-gtk-0.12
wget http://www.gnomeradio.org/~ole/debian/gnome-radio_16.0.43-1_amd64.deb
sudo dpkg -i gnome-radio_16.0.43-1_amd64.deb
Installation on Fedora Core 36 (GNOME 42) from GNOME Terminal
sudo dnf install http://www.gnomeradio.org/~ole/fedora/RPMS/x86_64/gnome-radio-16.0.43-1.fc36.x86_64.rpm
Installation on Ubuntu 22.04 (GNOME 42) from GNOME Terminal
sudo apt-get install gnome-common gcc git make wget
sudo apt-get install debhelper intltool dpkg-dev-el libgeoclue-2-dev
sudo apt-get install libgstreamer-plugins-bad1.0-dev libgeocode-glib-dev
sudo apt-get install gtk-doc-tools itstool libxml2-utils yelp-tools
sudo apt-get install libchamplain-0.12-dev libchamplain-gtk-0.12
wget http://www.gnomeradio.org/~ole/ubuntu/gnome-radio_16.0.43-1_amd64.deb
sudo dpkg -i gnome-radio_16.0.43-1_amd64.deb
More information about GNOME Radio 16.0.43 is available on http://www.gnomeradio.org/ and http://www.gnomeradio.org/news/
by oleaamot at May 28, 2022 05:03 AM
Some years ago Ubuntu introduced snap and said it would be better. In my experience it was slower.
And then they started packaging chromium-browser as a snap only. This broke kde-plasma and kde-connect (media and phone desktop integrations), and I resorted to installing Chrome from Google. This was quite easy because Chrome comes as a .deb package which also installs an apt source, so it's upgraded just like the rest of the system.
This, by the way, is the apt source for Chrome; you drop it in e.g. /etc/apt/sources.list.d/google-chrome.list:
deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main
And then you install the Google signing key:
wget -qO- https://dl.google.com/linux/linux_signing_key.pub | sudo tee /etc/apt/trusted.gpg.d/google-linux-signing-key.asc
Then you can do 'apt update' and 'apt install google-chrome-stable'. See also https://www.google.com/linuxrepositories/ for further information.
Lately I've been using Chrome less and less privately and Firefox more and more due to the privacy issues with Chrome.
In Ubuntu 22.04 they started providing Firefox as a snap, again breaking desktop and phone integration. Actually, I didn't look very hard; it was just gone and I wanted it back. There are no good apt sources for Firefox provided by the Mozilla project. The closest I could find was Firefox as provided by Debian.
Which turned out to work very well, but only thanks to the apt preference system.
You make two files: First /etc/apt/sources.list.d/bullseye.list:
deb http://ftp.no.debian.org/debian/ bullseye main
deb http://security.debian.org/debian-security bullseye-security main
deb http://ftp.no.debian.org/debian/ bullseye-updates main
Then put this in /etc/apt/preferences (I'm in Norway, replace "no" with another country code if you like):
Package: *
Pin: origin "ftp.no.debian.org"
Pin-Priority: 98
Package: *
Pin: origin "security.debian.org"
Pin-Priority: 99
Package: *
Pin: release n=jammy
Pin-Priority: 950

Also you need to install the Debian repository signing keys for that:
wget -qO- https://ftp-master.debian.org/keys/archive-key-11.asc | sudo tee /etc/apt/trusted.gpg.d/bullseye.asc
wget -qO- https://ftp-master.debian.org/keys/archive-key-11-security.asc | sudo tee /etc/apt/trusted.gpg.d/bullseye-security.asc
Then you execute these two in turn:
apt update
apt install firefox-esr
And you should have Firefox without getting any other things from Debian; the system will prefer Ubuntu 22.04 aka Jammy.
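Before installing anything you can verify that the pinning does what you want with apt-cache policy:

# Which source wins for the package we care about:
apt-cache policy firefox-esr
# And the overall priorities per source:
apt-cache policy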
Big fat NOTE: This might complicate later release upgrades on your Ubuntu box. do-release-upgrade will disable your Chrome and Bullseye apt-sources, and quite possibly the preference file will be neutralized as well, but if not you might have to neutralize it yourself.
by nicolai (noreply@blogger.com) at May 20, 2022 08:24 PM
Note: This piece is also available, with more basic formatting but with no trackers, here.
The good news is that the video does not exist. I know this, because neither does our friend Adnan here. Despite that fact, whoever operates the account presenting as Melissa appears to believe that Adnan is indeed a person who can be blackmailed. You're probably safe for now. I will provide more detail later in the article, but first a few dos and don'ts.

Update 2020-02-29: For completeness, and because I felt that an unsophisticated attack like the present one deserves a thorough if unsophisticated analysis, I decided to take a look at the log data for the entire 7 day period, post-rotation.
So here comes some armchair analysis, using only the tools you will find in the base system of your OpenBSD machine or any other running a sensibly stocked unix-like operating system. We start with finding the total number of delivery attempts logged where we have the body text 'am a hacker' (this would show up only after a sender has been blacklisted, so the gross number of actual delivery attempts will likely be a tad higher), with the command
zgrep "am a hacker" /var/log/spamd.0.gz | awk '{print $6}' | wc -l
which tells us the number is 3372.
Next up we use a variation of the same command to extract the source IP addresses of the log entries that contain the string 'am a hacker', sort the result while also removing duplicates and store the end result in an environment variable called lastweek:
export lastweek=`zgrep "am a hacker" /var/log/spamd.0.gz | awk '{print $6}' | tr -d ':' | sort -u `
With our list of IP addresses tucked away in the environment variable, we go on. For each IP address in our lastweek set, extract all log entries and store the result (still in crude sort order by IP address) in the file 2020-02-29_i_am_hacker.raw.txt:
for foo in $lastweek ; do zgrep $foo /var/log/spamd.0.gz | tee -a 2020-02-29_i_am_hacker.raw.txt ; done
For reference I kept the list of unique IP addresses (now totalling 231) around too.
Next, we are interested in extracting the target email addresses, so the command
grep "To:" 2020-02-29_i_am_hacker.raw.txt | awk '{print substr($0,index($0,$8))}' | sort -u
finds the lines in our original extract containing "To:", and gives us the list of target addresses the sources in our data set tried to deliver mail to.
The result is preserved as 2020-02-29_i_am_hacker.raw_targets.txt, a total of 236 addresses, mostly but not all in domains we actually host here. One surprise was that among the target addresses, one actually invalid address turned up that was not yet a spamtrap at the time. See the end of the activity log for details (it also turned out to be the last SMTP entry in that log for 2020-02-29).
This little round of armchair analysis on the static data set confirms the conclusions from the original article: apart from the possibly titillating aspects of the "adult" web site mentions and the attempt at playing on the target's potential shame over specific actions, as spam campaigns go, this one is ordinary to the point of being a bit boring.
There may well be other actors preying on higher-value targets through their online clumsiness and known peculiarities of taste in an actually targeted fashion, but this is not it.
A final note on tools: In this article, as in all previous entries, I have exclusively used the tools you will find in the OpenBSD (or other sensibly put together unix-like operating system) base system, or at a stretch as an easily available package.
For the simpler, preliminary investigations and poking around like we have done here, the basic tools in the base system are fine. But if you will be performing log analysis at scale or with any regularity for purposes that influence your career path, I would encourage you to look into setting up a proper, purpose-built log analysis system.
Several good options, open source and otherwise, are available. I will not recommend or endorse any specific one, but when you find one that fits your needs and working style you will find that after the initial setup and learning period it will save you significant time.
As per my practice, only material directly relevant to the article itself has been published via the links. If you are a professional practitioner or researcher who can state a valid reason to need access to unpublished material, please let me know and we will discuss your project.
Update 2020-03-02: I knew I had some early samples of messages that did make it to an inbox near me squirreled away somewhere, and after a bit of rummaging I found them, stored here (note the directory name, it seemed so obvious and transparent even back then). It appears that the oldest intact messages I have are from December 2018. I am sure earlier examples can be found if we look a little harder.
Update 2020-03-17: A fresh example turned up this morning, addressed to (of all things) the postmaster account of one of our associated .no domains, written in Norwegian (and apparently generated with Microsoft Office software). The preserved message can be downloaded here.
Update 2020-05-10: While rummaging about (aka 'researching') for something else I noticed that spamd logs were showing delivery attempts for messages with the subject "High level of danger. Your account was under attack." So out of idle curiosity on an early Sunday afternoon, I did the following:
$ export muggles=`grep " High level of danger." /var/log/spamd | awk '{print $6}' | tr -d ':' | sort -u`
$ for foo in $muggles; do grep $foo /var/log/spamd >>20200510-muggles ; done
and the result is preserved for your entertainment and/or enlightenment here. Not much to see, really, other than that they sent the message in two language varieties, and to a small subset of our imaginary friends.
Update 2020-08-13: Here is another snapshot of activity from August 12 and 13: this file preserves the activity of 19 different hosts, and as we can see, since they targeted our imaginary friends first, it is unlikely they reached any inboxes here. Some of these campaigns may have managed to reach users elsewhere, though.
Update 2020-09-06: Occasionally these messages manage to hit a mailbox here. Apparently enough Norwegians fall for these scams that Norwegian language versions (not terribly well worded) get aimed at users here. This example, aimed at what has only ever been an email alias, made it here, slipping through by a stroke of luck during a time when that IP address was briefly not in the spamd-greytrap list here, as can be seen from this log excerpt. It is also worth noting that an identically phrased message was sent from another IP address to mailer-daemon@ for one of the domains we run here.
Update 2021-01-06: For some reason, a new variant turned up here today (with a second message a few minutes later and then a third), addressed to a generic contact address here. A very quick check of logs here turned up only this indication of anything similar (based on a search for the variant spelling PRONOGRAPHIC), but feel free to check your own logs based on these samples if you like.
Update 2021-01-16: One more round, this time for my Swedish alter ego. Apparently sent from a poorly secured Vietnamese system.
Update 2021-01-18: A Norwegian version has surfaced, with delivery attempted to approximately 115 addresses in .no domains we handle. Fortunately the majority of the addresses targeted were in fact spamtraps, as this log extract shows.
Update 2021-03-03: After a few quiet weeks, another campaign started swelling our greytrapped hosts collection, as this hourly count of IP addresses in the traplist at dump to file time shows:
Tue Mar 2 21:10:01 CET 2021 : 2425
Tue Mar 2 22:10:01 CET 2021 : 4014
Tue Mar 2 23:10:01 CET 2021 : 4685
Wed Mar 3 00:10:01 CET 2021 : 4847
Wed Mar 3 01:10:01 CET 2021 : 5759
Wed Mar 3 02:10:01 CET 2021 : 6560
Wed Mar 3 03:10:01 CET 2021 : 6774
Wed Mar 3 04:10:01 CET 2021 : 7997
Wed Mar 3 05:10:01 CET 2021 : 8231
Wed Mar 3 06:10:01 CET 2021 : 8499
Wed Mar 3 07:10:01 CET 2021 : 9910
Wed Mar 3 08:10:01 CET 2021 : 10240
Wed Mar 3 09:10:01 CET 2021 : 11872
Wed Mar 3 10:10:01 CET 2021 : 12255
Wed Mar 3 11:10:01 CET 2021 : 13689
Wed Mar 3 12:10:01 CET 2021 : 14181
Wed Mar 3 13:10:01 CET 2021 : 15259
Wed Mar 3 14:10:01 CET 2021 : 15881
Wed Mar 3 15:10:02 CET 2021 : 17061
Wed Mar 3 16:10:01 CET 2021 : 17625
Wed Mar 3 17:10:01 CET 2021 : 18758
Wed Mar 3 18:10:01 CET 2021 : 19170
Wed Mar 3 19:10:01 CET 2021 : 20028
Wed Mar 3 20:10:01 CET 2021 : 20578
Wed Mar 3 21:10:01 CET 2021 : 20997
If you have further data on these or similar incidents that you are able to share or if you want to look further into these and similar incidents, please let me know.
If you find any errors in the material I publish or disagree with my sentiments, or if you find this article interesting, useful or annoying, please let me know, either in comments or via email.
by Peter N. M. Hansteen (noreply@blogger.com) at April 09, 2022 01:57 PM
Since 2.4.30, Apache comes with experimental support for ACME certificates (Let’s Encrypt et al.) in the form of mod_md (short for “managed domains”). It’s kind of a pain but it’s still better than what I had before, i.e. a mess of shell and Perl scripts based on Crypt::LE, and if your use case is limited to Apache, it appears to be simpler than Certbot as well. Unfortunately for me, it’s not very well documented and I wasted a considerable amount of time figuring out how to use it. Fortunately for you, I then decided to blog about it so you don’t have to repeat my mistakes.
Edit: the author of mod_md, Stefan Eissing, got in touch and pointed me to his own documentation, which is far superior to the one available from Apache.
My starting point is a freshly installed FreeBSD 13.0 server with Apache 2.4, but this isn’t really OS dependent.
First, you will need mod_ssl (of course) and a session cache, and you will need to tweak the TLS parameters, as the defaults are far from fine.
LoadModule ssl_module libexec/apache24/mod_ssl.so
SSLProtocol +TLSv1.3 +TLSv1.2
SSLCipherSuite TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
SSLHonorCipherOrder off
SSLCompression off
LoadModule socache_dbm_module libexec/apache24/mod_socache_dbm.so
SSLSessionCache dbm:/var/db/httpd_ssl_cache.db
You will also need to load mod_md, of course, and mod_watchdog, which mod_md needs to function.
LoadModule watchdog_module libexec/apache24/mod_watchdog.so
LoadModule md_module libexec/apache24/mod_md.so
MDCertificateAgreement accepted
MDContactEmail acme@example.com
The MDCertificateAgreement directive indicates that you have read and accepted Let’s Encrypt’s subscriber agreement, while MDContactEmail is the email address that you used to sign up to Let’s Encrypt.
You will also need mod_rewrite to redirect HTTP requests to HTTPS and mod_headers for HSTS.
LoadModule rewrite_module libexec/apache24/mod_rewrite.so
LoadModule headers_module libexec/apache24/mod_headers.so
By default, Apache only listens on port 80, so you’ll need an extra Listen directive for port 443.
Listen 443
And as always with Apache, you should probably set ServerName and ServerAdmin to sensible values.
ServerName server.example.com
ServerAdmin www@example.com
Next, set up an HTTP-only virtual host that you can use to check the status of mod_md.
<VirtualHost *:80>
    ServerName localhost
    <Location />
        Require ip 127.0.0.1/8 ::1
    </Location>
    <Location "/md-status">
        SetHandler md-status
    </Location>
</VirtualHost>
(Once Apache is running, you will be able to query it at any time as http://localhost/md-status.)
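For example, from the server itself; the handler returns a JSON document describing the state of each managed domain (fetch is in the FreeBSD base system, curl works just as well):

fetch -qo - http://localhost/md-status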
On to the actual website. First, you need to tell mod_md to manage certificates for it.
MDomain site.example.com
Next, set up a redirect from HTTP to HTTPS for everything except ACME challenge tokens.
<VirtualHost localhost:80>
    ServerName site.example.com
    RewriteEngine on
    RewriteRule "^/(?!.well-known/acme-challenge)(.*)" https://site.example.com/$1 [R=301,L]
    ErrorLog /www/site.example.com/logs/http-error.log
    CustomLog /www/site.example.com/logs/http-access.log combined
</VirtualHost>
And finally, the site itself, including HSTS and strict SNI:
<VirtualHost *:443>
    ServerName site.example.com
    SSLEngine on
    SSLStrictSNIVHostCheck On
    Header always set Strict-Transport-Security "max-age=15552000; includeSubdomains;"
    DocumentRoot /www/site.example.com/data
    IncludeOptional /www/site.example.com/etc/*.conf
    ErrorLog /www/site.example.com/logs/https-error.log
    CustomLog /www/site.example.com/logs/https-access.log combined
</VirtualHost>
Now start Apache and monitor the error log. You should see something like this pretty quickly:
[Sun Oct 10 16:15:27.450401 2021] [md:notice] [pid 12345] AH10059: The Managed Domain site.example.com has been setup and changes will be activated on next (graceful) server restart.
Once you do as it says (apachectl graceful), your site will be up and running and you can head over to the Qualys SSL Server Test and admire your solid A+.
Download the sample configuration and try it out yourself.
by Dag-Erling Smørgrav at October 10, 2021 06:19 PM
The right to privacy, the right to repair and the right to choose your own tools are sides of the same coin. A new court decision in Italy may help us win back rights we were manipulated into signing away.

You probably do not think about it very often, but if you are an ordinary IT user in an industrialized country, you have most likely been tricked into giving up rights. This happens on a scale that should worry anyone who cares about human rights.

Think about what happens when you start using something you are interested in, whether it is a computer of some kind, such as a PC, tablet or phone, or a net-based service.

Let us first take a closer look at what happens when a new computer, tablet or phone arrives in your home. One of the first things that happens after you power on the new device, and certainly before you get to use the thing for whatever you wanted it for, is that you must accept a legally binding agreement drawn up by and for those who produced the equipment. To be able to use what you have bought, you must accept an agreement that governs what you may use the device for.

In many cases several such agreements are presented, each with its own recording of whether you accept or not.

Some of these agreements restrict what you may use the device for, while others give the supplier, or someone cooperating with the supplier, permission to collect information about you and what you do with the device.

Many of these yes/no questions give the impression that you are free to decline, but you will find that you probably will not get past them to a device that is actually usable for its intended purpose until you have accepted all of these agreements.

One of the clearest consequences of the COVID-19 crisis is that a larger share of the population was pushed into an almost entirely digital existence, where communication in both work and school settings happens via digital devices and via services delivered on the terms of agreements dictated by the suppliers. For some of us, life has been close to all-digital for years already, but for many this is a new situation, and it is slowly dawning on more people that important freedoms and rights may be about to be lost.

The problem is not new. Many of us in IT circles have long warned that what we regard as human rights or civil rights are gradually being ground away in favour of certain companies and their owners.

When you power on a new computer or phone for the first time, you are probably asked almost immediately to accept an "end user license" for the operating system, that is, the software that controls the device. In its simplest form, a license is a document stating the conditions under which someone other than the creator of a work (here, the software) is permitted to make copies of the work. But in many cases the license document contains more detailed and far-reaching terms. The license agreement is often phrased as if you have the right to decline to use the operating system, delete the copies that came with the device or return physical copies and get your money back, and still keep using the physical machine. Some of us who have bought PCs and other devices have been able to install a different system than the one the machine was delivered with, and have chosen to live our digital lives using free alternatives such as Linux or OpenBSD. Some of us do this to get more direct control over the tools we use.

Those of us who have tried to get money back for an unused operating system license have mostly never managed to. But we will get back to that.

If you have managed to install a free alternative to the operating system the device was delivered with, you have struck a blow for the right to choose your own tools and the right to repair and control your own property. But unfortunately this is not the only point in your digital life where your rights are at risk.

Whether you accepted the end user license or not, you will soon run into software or net-based services that present their own end user agreements. Chances are you just click OK without reading the terms.

Feel free to take a break now and check what you have actually agreed to. You will probably find that both operating system vendors and social media services have had you give them permission to record what you do when you use the system or service. Do take the time to check every product and service you have registered with. It is likely that not just one, but most of the services and products you use on a net-connected device have granted themselves the right to capture and store data about what you do. If you use the device for anything private or sensitive at all, it is worth looking closely at what consequences these agreements have for your right to privacy and the protection of your private sphere.

On paper (to put it in old-fashioned terms), those of us who live in EU and EEA countries have the right to have the data stored about us handed over, to have errors corrected, and even to have data deleted, in accordance with the EU General Data Protection Regulation (GDPR). If what you found while checking those agreements during your break from this text makes you uneasy or worried, that is good reason to make use of the rights to access, export, correction and deletion. If you do not get a meaningful answer, contact Datatilsynet (the Norwegian Data Protection Authority) or Forbrukertilsynet (the Norwegian Consumer Authority), who should stand ready to help.

But what about the right to repair, or the right to choose your own tools? On that front too there is reason for hope. After an extensive process, a court in Italy concluded that not only did a Linux enthusiast have the right to install Linux on his new Lenovo computer, the customer also had the right to a refund of the price of the operating system that would not be used. And since Lenovo had tried to dodge the obligations stated in the end user license presented to the customer, the company was fined 20,000 Euro.

A court decision like this does not set a direct precedent for other European countries, and there are decisions in other countries that did not agree with the customer that the operating system and the computer could be treated as separate goods. We in the Norwegian Unix User Group (NUUG) are now taking part in an effort coordinated by the Free Software Foundation Europe (FSFE) to defend and strengthen your and my right to privacy, right to repair and right to choose the tools we use to run our digital lives.

If anything you have read here worries you, confuses you, makes you angry, or simply inspires you to work for stronger civil and human rights in our digital lives, we would be happy to hear from you.

Peter N. M. Hansteen

Chair of the Norwegian Unix User Group (NUUG)

The Italian court decision that gives us hope is described on the FSFE website: Refund of pre-installed Windows: Lenovo must pay 20,000 euros in damages
An English version is available as Are you aware what you lose by just clicking OK to get started using something?
We have a Samsung Smart TV, and I like it. We also have a cable box, a Blu-ray player (because sometimes we borrow movies at the library and anyone in the family needs to be able to play them with no help from dad), and a Chromecast. All three HDMI inputs on the TV are used.
Samsung was pretty big on DLNA, a UPnP based protocol for media playback. It's not a feature they tout a lot now: people want Netflix. Samsung also had very good codec support from the start, which greatly reduces the need for transcoding. They still have the DLNA client built into their TVs, and the codec support is even better now. So this solves the in-house streaming problem very neatly without needing an extra box by the TV: the server in the basement ought to be enough when the TV has DLNA.
I understand that DLNA is a bit low featured in 2021: no on-screen movie/episode synopsis, not very slick navigation or on-screen controls for everything. But it's enough. DLNA is right-featured. The Samsung DLNA client has very nice movie navigation and on-screen menus for subtitle selection. And I don't particularly want a TV box, though it would certainly play all kinds of video with no issues at all, support subs better and have a nice graphical interface. It seems I'm not in tune with the world on this though: DLNA servers seem to be more rare than before.
Since the start around 2010 I've used tvMobili, a closed source but fully functional DLNA server software that just worked with our TV, including on-screen subtitles without transcoding. It also just worked with phone based DLNA software. Since my wife is deaf and not all the kids are that steady in English yet, subtitles are a required feature. Subtitles are also nice for late night TV viewing with low volume.
Recently the disk the tvMobili software stored its database on filled up and the database got corrupted. I tidied the disk and nuked the database. Unfortunately that reset the software and gave me its out-of-the-box experience: "Please create your account at tvMobili to proceed". tvMobili, the company, gave up in 2013. There was no registration service running any more and I could not get past the registration screen. And also I had no backup of that directory, since it only contained stuff I could reconstruct anyway. I thought. (I realize I might try to write the registration service and run it locally, but maybe it's time to try something new?)
So what to do for in-house streaming now then? This is important to me:
The candidates.
by nicolai (noreply@blogger.com) at May 11, 2021 12:54 PM
wtf, zsh
% uname -sr
FreeBSD 12.1-RELEASE-p10
% for sh in sh csh bash zsh ; do printf "%-8s" $sh ; $sh -c 'echo \\x21' ; done
sh      \x21
csh     \x21
bash    \x21
zsh     !
% cowsay wtf, zsh
 __________
< wtf, zsh >
 ----------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
I mean. Bruh. I know it’s intentional & documented & can be turned off, but every other shell defaults to POSIX semantics…
BTW:
% ln -s =zsh /tmp/sh
% /tmp/sh -c 'echo \x21'
\x21
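For the record, one documented way to turn it off, if you want POSIX-style echo in zsh:

% setopt BSD_ECHO
% echo '\x21'
\x21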
by Dag-Erling Smørgrav at September 22, 2020 01:11 PM
There are millions of books whose copyright term has expired. Some of them are Norwegian books, and a number of those are not available in digital form. To try to do something about the latter, NUUG has decided to have a book scanner built. The design is based on a simple variant in plastic (build instructions), but it will be made in aluminium for a longer service life.

The job of building the scanner has been given to our friends at Oslo Sveisemek, who are well under way with the work. Here is a sketch of the construction:

The base frame has been assembled, but quite a bit still remains:

The idea is that members and others will be able to borrow or rent the book scanner when needed, and those of us who are interested can get going digitizing books with OCR and perseverance. Contact aktive (at) nuug.no if this is something for you, or drop by #nuug.

(Photographer: Jonny Birkelund)
If we had made a program that translated from Norwegian to Sámi, the result would have been Sámi at least as bad as the Norwegian we are able to produce today. Norwegian and Sámi are grammatically very different, and it is hard to produce good Sámi from a Norwegian source. Such a program would lead to the publication of an awful lot of very bad Sámi. A situation where most of the Sámi published on the internet comes from our programs strikes us as a nightmare. It would quite simply have destroyed Sámi written culture.

See the op-ed: https://www.nordnorskdebatt.no/samisk-sprak/digitalisering/facebook/kan-samisk-brukes-i-det-offentlige-rom/o/5-124-48030
by unhammer at May 31, 2018 09:00 AM
Following up on the CentOS 7 root filesystem on tmpfs post, here comes a guide on how to run a ZFS enabled CentOS 7 NAS server (with the operating system) from tmpfs.
The disk image is built in macOS using Packer and VirtualBox. VirtualBox is installed using the appropriate platform package downloaded from their website, and Packer is installed using brew:
$ brew install packer
Three files are needed in order to build the disk image: a Packer template file, an Anaconda kickstart file and a shell script that is used to configure the disk image after installation. The following files can be used as examples:

- template.json (Packer template example file)
- ks.cfg (Anaconda kickstart example file)
- provision.sh (Provision shell script example file)

Create some directories:
$ mkdir -p ~/work/centos-7-zfs/
$ mkdir -p ~/work/centos-7-zfs/http/
$ mkdir -p ~/work/centos-7-zfs/scripts/
Copy the files to these directories:
$ cp template.json ~/work/centos-7-zfs/
$ cp ks.cfg ~/work/centos-7-zfs/http/
$ cp provision.sh ~/work/centos-7-zfs/scripts/
Modify each of the files to fit your environment.
Start the build process using Packer:
$ cd ~/work/centos-7-zfs/
$ packer build template.json
This will download the CentOS 7 ISO file, start an HTTP server to serve the kickstart file and start a virtual machine using Virtualbox:
The virtual machine will boot into Anaconda and run through the installation process as specified in the kickstart file:
When the installation process is complete, the disk image will be available in the output-virtualbox-iso folder with the vmdk extension.
The disk image is now ready to be put in initramfs.
This section is quite similar to the previous blog post CentOS 7 root filesystem on tmpfs but with minor differences. For simplicity reasons it is executed on a host running CentOS 7.
Create the build directories:
$ mkdir /work
$ mkdir /work/newroot
$ mkdir /work/result
Export the files from the disk image to one of the directories we created earlier:
$ export LIBGUESTFS_BACKEND=direct
$ guestfish --ro -a packer-virtualbox-iso-1508790384-disk001.vmdk -i copy-out / /work/newroot/
Modify /etc/fstab:
$ cat > /work/newroot/etc/fstab << EOF
tmpfs / tmpfs defaults,noatime 0 0
none /dev devtmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
EOF
Disable selinux:
echo "SELINUX=disabled" > /work/newroot/etc/selinux/config
Disable clearing the screen on login failure to make it possible to read any error messages:
mkdir /work/newroot/etc/systemd/system/getty@.service.d
cat > /work/newroot/etc/systemd/system/getty@.service.d/noclear.conf << EOF
[Service]
TTYVTDisallocate=no
EOF
Now jump to the Initramfs and Result sections in the CentOS 7 root filesystem on tmpfs post and follow those steps until the end, when the result is a vmlinuz and an initramfs file.
The first time the NAS server boots on the disk image, the ZFS storage pool and volumes will have to be configured. Refer to the ZFS documentation for information on how to do this, and use the following commands only as guidelines.
Create the storage pool:
$ sudo zpool create data mirror sda sdb mirror sdc sdd
Create the volumes:
$ sudo zfs create data/documents
$ sudo zfs create data/games
$ sudo zfs create data/movies
$ sudo zfs create data/music
$ sudo zfs create data/pictures
$ sudo zfs create data/upload
Share some volumes using NFS:
zfs set sharenfs=on data/documents
zfs set sharenfs=on data/games
zfs set sharenfs=on data/music
zfs set sharenfs=on data/pictures
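Assuming the NFS server services are running, the result can be sanity-checked with the following (showmount comes with the NFS utilities):

# Show the sharenfs property for the pool and all volumes:
zfs get -r sharenfs data
# List what is actually being exported:
showmount -e localhost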
Print the storage pool status:
$ sudo zpool status
pool: data
state: ONLINE
scan: scrub repaired 0B in 20h22m with 0 errors on Sun Oct 1 21:04:14 2017
config:
        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
errors: No known data errors
Mimes brønn is a web service that helps you request access to documents from the Norwegian public administration in accordance with the Freedom of Information Act (offentleglova) and the Environmental Information Act (miljøinformasjonsloven). The service has a publicly available archive of all answers to access requests, so that public bodies will not have to answer the same requests over and over. You find the service at

According to old Norse mythology, the well of knowledge is guarded by Mime and lies under one of the roots of the world tree Yggdrasil. Drinking the water of Mime's well gave such valuable knowledge and wisdom that the young god Odin was willing to pawn an eye and become one-eyed to be allowed to drink from it.

The site is maintained by the NUUG association and is particularly well suited for politically interested people, organizations and journalists. The service is based on its British sister service WhatDoTheyKnow.com, which has already provided access that has resulted in documentaries and countless press stories. According to mySociety, a few years ago about 20% of the access requests to central authorities went through WhatDoTheyKnow. We in NUUG hope NUUG's service Mimes brønn can be just as useful for the inhabitants of Norway.

This weekend the service was updated with a lot of new functionality. The new version works better on small screens and now shows delivery status for requests, so that the sender can more easily check that the recipient's email system has confirmed receipt of the access request. The service was set up by volunteers in the NUUG association and launched in the summer of 2015. Since then, 121 users have sent more than 280 requests about everything from wedding rentals of the Opera house and negotiations over the use of Norway's top level DNS domain .bv, to the filing of applications for housing support, and the site is a small treasure chest of interesting and useful information. NUUG has engaged lawyers who can assist with appeals against denied access or flawed case handling.

– "NUUG's Mimes brønn was invaluable when we succeeded in ensuring that the .bv DNS top level domain remains in Norwegian hands," says Håkon Wium Lie.

The service documents widely diverging practices in the handling of access requests, both in response times and in the content of the answers. The vast majority are handled quickly and correctly, but in several cases access has been granted to documents that the responsible agency later wished to withdraw, and access has been granted where redaction was done in a way that did not actually hide the information that was supposed to be redacted.

– "The Freedom of Information Act is a pillar of our democracy. It does not care who requests access, or why. The Mimes brønn project materializes this principle: anyone can request access and appeal a denial, and the documentation is made public. This makes Mimes brønn one of the most exciting transparency projects I have seen in recent times," says the man who got the tax authority's ownership register opened up, Vegard Venli.

We in the NUUG association hope Mimes brønn can be a useful tool in keeping our democracy healthy.
by Mimes Brønn at February 13, 2017 02:07 PM
Several years ago I wrote a series of posts on how to run EL6 with its root filesystem on tmpfs. This post is a continuation of that series, and explains step by step how to run CentOS 7 with its root filesystem in memory. It should apply to RHEL, Ubuntu, Debian and other Linux distributions as well. The post is a bit terse to focus on the concept, and several of the steps have potential for improvements.
The following is a screen recording from a host running CentOS 7 in tmpfs:
A build host is needed to prepare the image to boot from. The build host should run CentOS 7 x86_64, and have the following packages installed:
yum install libvirt libguestfs-tools guestfish
Make sure the libvirt daemon is running:
systemctl start libvirtd
Create some directories that will be used later, however feel free to relocate these to somewhere else:
mkdir -p /work/initramfs/bin
mkdir -p /work/newroot
mkdir -p /work/result
For simplicity reasons we’ll fetch our rootfs from a pre-built disk image, but it is possible to build a custom disk image using virt-manager. I expect that most people would like to create their own disk image from scratch, but this is outside the scope of this post.
Use virt-builder to download a pre-built CentOS 7.3 disk image and set the root password:
virt-builder centos-7.3 -o /work/disk.img --root-password password:changeme
Export the files from the disk image to one of the directories we created earlier:
guestfish --ro -a /work/disk.img -i copy-out / /work/newroot/
Clear fstab since it contains mount entries that no longer apply:
echo > /work/newroot/etc/fstab
SELinux will complain about incorrect disk label at boot, so let’s just disable it right away. Production environments should have SELinux enabled.
echo "SELINUX=disabled" > /work/newroot/etc/selinux/config
Disable clearing the screen on login failure to make it possible to read any error messages:
mkdir /work/newroot/etc/systemd/system/getty@.service.d
cat > /work/newroot/etc/systemd/system/getty@.service.d/noclear.conf << EOF
[Service]
TTYVTDisallocate=no
EOF
We’ll create our custom initramfs from scratch. The boot procedure will be, simply put:

1. Execute /init (in the initramfs).
2. Create the tmpfs mount point.
3. Extract the root filesystem archive to the tmpfs mount point.
4. Execute switch_root to boot on the CentOS 7 root filesystem.

The initramfs will be based on BusyBox. Download a pre-built binary or compile it from source, and put the binary in the initramfs/bin directory. In this post I’ll just download a pre-built binary:
wget -O /work/initramfs/bin/busybox https://www.busybox.net/downloads/binaries/1.26.1-defconfig-multiarch/busybox-x86_64
Make sure that busybox has the execute bit set:
chmod +x /work/initramfs/bin/busybox
Create the file /work/initramfs/init with the following contents:
#!/bin/busybox sh
# Dump to sh if something fails
error() {
echo "Jumping into the shell..."
setsid cttyhack sh
}
# Populate /bin with binaries from busybox
/bin/busybox --install /bin
mkdir -p /proc
mount -t proc proc /proc
mkdir -p /sys
mount -t sysfs sysfs /sys
mkdir -p /sys/dev
mkdir -p /var/run
mkdir -p /dev
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
# Populate /dev
echo /bin/mdev > /proc/sys/kernel/hotplug
mdev -s
mkdir -p /newroot
mount -t tmpfs -o size=1500m tmpfs /newroot || error
echo "Extracting rootfs... "
xz -d -c -f rootfs.tar.xz | tar -x -f - -C /newroot || error
mount --move /sys /newroot/sys
mount --move /proc /newroot/proc
mount --move /dev /newroot/dev
exec switch_root /newroot /sbin/init || error
Make sure it is executable:
chmod +x /work/initramfs/init
Create the root filesystem archive using tar. The following command also uses xz compression to reduce the final size of the archive (from approximately 1 GB to 270 MB):
cd /work/newroot
tar cJf /work/initramfs/rootfs.tar.xz .
Create initramfs.gz using:
cd /work/initramfs
find . -print0 | cpio --null -ov --format=newc | gzip -9 > /work/result/initramfs.gz
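A quick way to sanity-check the resulting archive without booting it is to list its contents:

gzip -dc /work/result/initramfs.gz | cpio -t | head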
Copy the kernel directly from the root filesystem using:
cp /work/newroot/boot/vmlinuz-*x86_64 /work/result/vmlinuz
The /work/result directory now contains two files with file sizes similar to the following:
ls -lh /work/result/
total 277M
-rw-r--r-- 1 root root 272M Jan 6 23:42 initramfs.gz
-rwxr-xr-x 1 root root 5.2M Jan 6 23:42 vmlinuz
These files can be loaded directly in GRUB from disk, or using iPXE over HTTP using a script similar to:
#!ipxe
kernel http://example.com/vmlinuz
initrd http://example.com/initramfs.gz
boot
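For the GRUB case, a minimal menuentry along these lines should do the job (the device and paths depend on where you put the two files):

menuentry "CentOS 7 in tmpfs" {
    linux /vmlinuz
    initrd /initramfs.gz
}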
Mimes brønn has now been up for about a year, so we thought it could be interesting to present some brief statistics on how the service has been used.

At the beginning of July 2016, Mimes brønn had 71 registered users who had sent 120 access requests, of which 62 (52%) were successful, 19 (16%) partially successful, 14 (12%) rejected, 10 (8%) were answered that the body did not have the information, and 12 requests (10%; 6 from 2016, 6 from 2015) were still unanswered. A handful (3) of the requests could not be categorized. So around two thirds of the requests were fully or partially successful. That is good!

The time until a body first replies varies a lot, from the same day (some requests sent to Utlendingsnemnda, Statens vegvesen, Økokrim, Mediatilsynet, Datatilsynet, Brønnøysundregistrene) up to 6 months (Ballangen kommune) or longer (Stortinget, Olje- og energidepartementet, Justis- og beredskapsdepartementet, UDI – Utlendingsdirektoratet and SSB have all received access requests that are still unanswered). The average was a couple of weeks (leaving out the 12 cases where no answer has arrived). It follows from § 29 first paragraph of the Freedom of Information Act that requests for access to the administration's documents shall be answered "without undue delay", which according to the Parliamentary Ombudsman should in most cases be interpreted as "the same day or at least within 1-3 working days". So there is room for improvement.

The right to appeal (offentleglova § 32) was used in 20 of the requests. In most (15; 75%) of those cases the appeal led to the request succeeding. The average time to get an answer to an appeal was a month (leaving out 2 cases, appeals sent to Statens vegvesen and Ruter AS, where no answer has arrived). Appealing is well worth it, and completely free! The Parliamentary Ombudsman has stated that 2-3 weeks is beyond acceptable processing time for appeals.

Most requests had been sent to Utenriksdepartementet (9), closely followed by Fredrikstad kommune and Brønnøysundregistrene. In all, requests were sent to 60 public bodies, of which 27 received two or more. The Mimes brønn database holds more than 3700 bodies, so most of them have yet to receive their first access request through the service.

Looking at what kind of information people have asked for, we see a broad range of interests: everything from the municipality's parking spaces, travel expense reports where the state's accommodation rates were exceeded, and correspondence about asylum reception centres and negotiations over the .bv top level domain, to documents about Myanmar.

The authorities do all sorts of things. Some of it is done badly, some of it is done well. The more we find out about how the authorities work, the better placed we are to suggest improvements to what works badly... and to applaud what works well. If there is something you want access to, just go to https://www.mimesbronn.no/ and get started.
by Mimes Brønn at July 15, 2016 03:56 PM
Twitter user @IngeborgSteine recently got quite a bit of attention when she tweeted a picture of the Nynorsk version of her economics exam at NTNU:

This was my economics exam in "Nynorsk". #nynorsk #noregsmållag #kvaialledagar https://t.co/RjCKSU2Fyg —

Ingeborg Steine (@IngeborgSteine) May 30, 2016

Creative inventions like *kvisleis, and all the dialect forms and archaisms, would have been unlikely in a machine translated version, so I wondered how much better/worse it would have been if the examiner had simply used Apertium instead? Ingeborg Steine was helpful enough to post the Bokmål version, so let's try.

No kvisleis, and free of tær and fyr, but it is not perfect either: some words are missing from the dictionaries and thus get the wrong inflection, teller is interpreted as a noun, ein anna maskin has the wrong inflection of the first word (a rule was missing there), and at is in one place interpreted as an adverb (which leads to the curious fragment det verta at anteke tilvarande). In addition, the web page detects the language as Tatar, so maybe the Norwegian was a bit heavy going? But these errors are not particularly hard to fix – the development version of Apertium now gives:

There are still a couple of small things that could be fixed, but this is already better than most of the exams I was handed at UiO …
by unhammer at June 01, 2016 09:45 AM
One of the biggest takeaways from 31C3 and the most recent Snowden-leaked NSA documents is that a lot of SSH stuff is .. broken.
I’m not surprised, but then again I never am when it comes to this paranoia stuff. However, I do run a ton of SSH in production and know a lot of people that do. Are we all fucked? Well, almost, but not really.
Unfortunately most of what Stribika writes about the “Secure Secure Shell” doesn’t work for old production versions of SSH. The cliff notes for us real-world people, who will realistically be running SSH 5.9p1 for years, are hidden in the bettercrypto.org repo.
Edit your /etc/ssh/sshd_config:
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160
KexAlgorithms diffie-hellman-group-exchange-sha256
Basically, the nice and forward secure aes-*-gcm and chacha20-poly1305 ciphers, the curve25519-sha256 Kex algorithm and the Encrypt-then-MAC message authentication modes are not available to those of us stuck in the early 2000s. That’s right, provably NSA-proof stuff is not supported. Upgrading at this point makes sense.
Still, we can harden SSH, so go into /etc/ssh/moduli and delete all the moduli that have a 5th column < 2048, and disable the DSA, ECDSA and SSHv1 host keys:
cd /etc/ssh
mkdir -p broken
mv moduli ssh_host_dsa_key* ssh_host_ecdsa_key* ssh_host_key* broken
awk '{ if ($5 > 2048){ print } }' broken/moduli > moduli
# create broken links to force SSH not to regenerate broken keys
ln -s ssh_host_ecdsa_key ssh_host_ecdsa_key
ln -s ssh_host_dsa_key ssh_host_dsa_key
ln -s ssh_host_key ssh_host_key
Your clients, which hopefully have more recent versions of SSH, could have the following settings in /etc/ssh/ssh_config or .ssh/config:
Host all-old-servers
    Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-ripemd160
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Note: Sadly, the -ctr ciphers do not provide forward security and hmac-ripemd160 isn’t the strongest MAC. But if you disable these, there are plenty of places you won’t be able to connect to. Upgrade your servers to get rid of these poor auth methods!
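On clients recent enough to have it, ssh -Q lists exactly which algorithms your build supports, which takes some of the guesswork out of tuning these lists:

ssh -Q cipher
ssh -Q mac
ssh -Q kex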
There, done.
Updated Jan 6th to highlight the problems of not upgrading SSH.
Updated Jan 22nd to note CTR mode isn’t any worse.
Go learn about COMSEC if you didn’t get trolled by the title.
by kacper at January 06, 2015 04:33 PM
Intermission..
Recently I've been doing some video editing.. less editing than tweaking my system though.
If you want your JACK output to speak with Kdenlive, a most excellent video editing suite, and output audio in a nice way without choppiness and popping (which I promise you is not nice), you'll want to pipe it through PulseAudio, because the ALSA-to-JACK stuff doesn't do well with Phonon, at least not on this convoluted setup.
Remember, to get that setup to work, ALSA pipes to JACK with the pcm.jack { type jack .. } thing, and you remove the ALSA-to-PulseAudio stupidity at /usr/share/alsa/alsa.conf.d/50-pulseaudio.conf.
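For reference, such a pcm.jack definition (e.g. in ~/.asoundrc) typically looks something like this sketch; the port names are assumptions, check jack_lsp for yours:

pcm.jack {
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}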
So, once that’s in place, it won’t play even though Pulse found your Jack because your clients are defaulting out on some ALSA device… this is when you change /etc/pulse/client.conf
and set default-sink = jack_out
.
by kacper at December 08, 2014 12:18 AM
$typedef = 'A8 A16 A16 L';
$sizeof = length pack($typedef, () );
while ( read(WTMP, $buffer, $sizeof) == $sizeof ) {
    ($line, $user, $host, $time) = unpack($typedef, $buffer);
    # Do whatever you want with these values here
}

So FreeBSD only uses the values line (ut_line), user (ut_name), host (ut_host) and time (ut_time), cf. utmp.h. Linux (x64, who cares about 32-bit?), on the other hand, stores quite a bit more in the wtmp log, and after some googling, trial and error and poking around in bits/utmp.h I arrived at:

$typedef = "s x2 i A32 A4 A32 A256 s2 l i2 i4 A20";
$sizeof = length pack($typedef, () );
while ( read(WTMP, $buffer, $sizeof) == $sizeof ) {
    ($type, $pid, $line, $id, $user, $host, $term, $exit, $session,
     $sec, $usec, $addr, $unused) = unpack($typedef, $buffer);
    # Do whatever you want with these values here
}

Which just works, nice. Now I can see users logging in and out in real time, and can take action based on that.