Update 2020-02-29: For completeness and because I felt that an unsophisticated attack like the present one deserves a thorough if unsophisticated analysis, I decided to take a look at the log data for the entire 7 day period, post-rotation.
So here comes some armchair analysis, using only the tools you will find in the base system of your OpenBSD machine or any other machine running a sensibly stocked unix-like operating system. We start with finding the total number of delivery attempts logged where we have the body text 'am a hacker' (this would show up only after a sender has been blacklisted, so the gross number of actual delivery attempts will likely be a tad higher), with the command
zgrep "am a hacker" /var/log/spamd.0.gz | awk '{print $6}' | wc -l
which tells us the number is 3372.
Next up we use a variation of the same command to extract the source IP addresses of the log entries that contain the string 'am a hacker', sort the result while also removing duplicates and store the end result in an environment variable called lastweek:
export lastweek=`zgrep "am a hacker" /var/log/spamd.0.gz | awk '{print $6}' | tr -d ':' | sort -u `
With our list of IP addresses tucked away in the environment variable, we go on to the next step: for each IP address in our lastweek set, extract all log entries and store the result (still in crude sort order by IP address) in the file 2020-02-29_i_am_hacker.raw.txt:
for foo in $lastweek ; do zgrep $foo /var/log/spamd.0.gz | tee -a 2020-02-29_i_am_hacker.raw.txt ; done
For reference I kept the list of unique IP addresses (now totalling 231) around too.
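That count is easy to reproduce while the variable is still in the environment, as a quick sketch:

echo $lastweek | tr ' ' '\n' | wc -l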
Next, we are interested in extracting the target email addresses, so the command
grep "To:" 2020-02-29_i_am_hacker.raw.txt | awk '{print substr($0,index($0,$8))}' | sort -u
finds the lines in our original extract containing "To:", and gives us the list of target addresses the sources in our data set tried to deliver mail to.
The result is preserved as 2020-02-29_i_am_hacker.raw_targets.txt, a total of 236 addresses, mostly but not all in domains we actually host here. One surprise was that among the target addresses, one invalid address turned up that was not yet a spamtrap at the time. See the end of the activity log for details (it also turned out to be the last SMTP entry in that log for 2020-02-29).
This little round of armchair analysis on the static data set confirms the conclusions from the original article: Apart from the possibly titillating aspects of the "adult" web site mentions and the attempt at playing on the target's potential shame over specific actions, as spam campaigns go, this one is ordinary to the point of being a bit boring.
There may well be other actors preying on higher-value targets through their online clumsiness and known peculiarities of taste in an actually targeted fashion, but this is not it.
A final note on tools: In this article, like all previous entries, I have exclusively used the tools you will find in the OpenBSD (or other sensibly put together unix-like operating system) base system, or at a stretch as an easily available package.
For the simpler, preliminary investigations and poking around like we have done here, the basic tools in the base system are fine. But if you will be performing log analysis at scale or with any regularity for purposes that influence your career path, I would encourage you to look into setting up a proper, purpose-built log analysis system.
Several good options, open source and otherwise, are available. I will not recommend or endorse any specific one, but when you find one that fits your needs and working style you will find that after the initial setup and learning period it will save you significant time.
As per my practice, only material directly relevant to the article itself has been published via the links. If you are a professional practitioner or researcher who can state a valid reason to need access to unpublished material, please let me know and we will discuss your project.
Update 2020-03-02: I knew I had some early samples of messages that did make it to an inbox near me squirreled away somewhere, and after a bit of rummaging I found them, stored here (note the directory name, it seemed so obvious and transparent even back then). It appears that the oldest intact messages I have are from December 2018. I am sure earlier examples can be found if we look a little harder.
Update 2020-03-17: A fresh example turned up this morning, addressed to (of all things) the postmaster account of one of our associated .no domains, written in Norwegian (and apparently generated with Microsoft Office software). The preserved message can be downloaded here.
Update 2020-05-10: While rummaging about (aka 'researching') for something else I noticed that spamd logs were showing delivery attempts for messages with the subject "High level of danger. Your account was under attack." So out of idle curiosity on an early Sunday afternoon, I did the following:
$ export muggles=`grep " High level of danger." /var/log/spamd | awk '{print $6}' | tr -d ':' | sort -u`
$ for foo in $muggles; do grep $foo /var/log/spamd >>20200510-muggles ; done
and the result is preserved for your entertainment and/or enlightenment here. Not much to see, really, other than that they sent the message in two language varieties, and to a small subset of our imaginary friends.
Update 2020-08-13: Here is another snapshot of activity from August 12 and 13: this file preserves the activity of 19 different hosts, and as we can see, since they targeted our imaginary friends first, it is unlikely they reached any inboxes here. Some of these campaigns may have managed to reach users elsewhere, though.
Update 2020-09-06: Occasionally these messages manage to hit a mailbox here. Apparently enough Norwegians fall for these scams that Norwegian language versions (not terribly well worded) get aimed at users here. This example, aimed at what has only ever been an email alias, made it here, slipping through by a stroke of luck during a time when that IP address was briefly not in the spamd-greytrap list here, as can be seen from this log excerpt. It is also worth noting that an identically phrased message was sent from another IP address to mailer-daemon@ for one of the domains we run here.
Update 2021-01-06: For some reason, a new variant turned up here today (with a second message a few minutes later and then a third), addressed to a generic contact address here. A very quick check of logs here turned up only this indication of anything similar (based on a search for the variant spelling PRONOGRAPHIC), but feel free to check your own logs based on these samples if you like.
Update 2021-01-16: One more round, this time for my Swedish alter ego. Apparently sent from a poorly secured Vietnamese system.
Update 2021-01-18: A Norwegian version has surfaced, with delivery attempted to approximately 115 addresses in .no domains we handle. Fortunately the majority of the addresses targeted were in fact spamtraps, as this log extract shows.
Update 2021-03-03: After a few quiet weeks, another campaign started swelling our greytrapped hosts collection, as this hourly count of IP addresses in the traplist at dump to file time shows:
Tue Mar 2 21:10:01 CET 2021 : 2425
Tue Mar 2 22:10:01 CET 2021 : 4014
Tue Mar 2 23:10:01 CET 2021 : 4685
Wed Mar 3 00:10:01 CET 2021 : 4847
Wed Mar 3 01:10:01 CET 2021 : 5759
Wed Mar 3 02:10:01 CET 2021 : 6560
Wed Mar 3 03:10:01 CET 2021 : 6774
Wed Mar 3 04:10:01 CET 2021 : 7997
Wed Mar 3 05:10:01 CET 2021 : 8231
Wed Mar 3 06:10:01 CET 2021 : 8499
Wed Mar 3 07:10:01 CET 2021 : 9910
Wed Mar 3 08:10:01 CET 2021 : 10240
Wed Mar 3 09:10:01 CET 2021 : 11872
Wed Mar 3 10:10:01 CET 2021 : 12255
Wed Mar 3 11:10:01 CET 2021 : 13689
Wed Mar 3 12:10:01 CET 2021 : 14181
Wed Mar 3 13:10:01 CET 2021 : 15259
Wed Mar 3 14:10:01 CET 2021 : 15881
Wed Mar 3 15:10:02 CET 2021 : 17061
Wed Mar 3 16:10:01 CET 2021 : 17625
Wed Mar 3 17:10:01 CET 2021 : 18758
Wed Mar 3 18:10:01 CET 2021 : 19170
Wed Mar 3 19:10:01 CET 2021 : 20028
Wed Mar 3 20:10:01 CET 2021 : 20578
Wed Mar 3 21:10:01 CET 2021 : 20997
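For the curious, a count like the one above is trivial to produce on any OpenBSD spamd(8) gateway. A minimal sketch, assuming an hourly cron(8) job (the output file name is my invention; spamdb(8) marks greytrapped entries with the TRAPPED keyword):

echo "`date` : `spamdb | grep -c TRAPPED`" >> /var/log/traplist-counts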
by Peter N. M. Hansteen (noreply@blogger.com) at March 03, 2021 08:30 PM
I have neglected the Valutakrambod library for a while, but decided this weekend to give it a face lift. I fixed a few minor glitches in several of the service drivers, where the API had changed since I last looked at the code. I also added support for fetching the order book from the newcomer Norwegian Bitcoin Exchange.
I also decided to migrate the project from github to gitlab in the process. If you want a python library for talking to various currency exchanges, check out the code for valutakrambod.
This is what the output from 'bin/btc-rates-curses -c' looked like a few minutes ago:
Name          Pair    Bid          Ask          Spread  Ftcd  Age    Freq
Bitfinex      BTCEUR  39229.0000   39246.0000    0.0%    44    44    nan
Bitmynt       BTCEUR  39071.0000   41048.9000    4.8%    43    74    nan
Bitpay        BTCEUR  39326.7000   nan           nan%    39    nan   nan
Bitstamp      BTCEUR  39398.7900   39417.3200    0.0%     0     0    1
Bl3p          BTCEUR  39158.7800   39581.9000    1.1%     0    nan   3
Coinbase      BTCEUR  39197.3100   39621.9300    1.1%    38    nan   nan
Kraken        BTCEUR  39432.9000   39433.0000    0.0%     0     0    0
Paymium       BTCEUR  39437.2100   39499.9300    0.2%     0    2264  nan
Bitmynt       BTCNOK  409750.9600  420516.8500   2.6%    43    74    nan
Bitpay        BTCNOK  410332.4000  nan           nan%    39    nan   nan
Coinbase      BTCNOK  408675.7300  412813.7900   1.0%    38    nan   nan
MiraiEx       BTCNOK  412174.1800  418396.1500   1.5%    34    nan   nan
NBX           BTCNOK  405835.9000  408921.4300   0.8%    33    nan   nan
Bitfinex      BTCUSD  47341.0000   47355.0000    0.0%    44    53    nan
Bitpay        BTCUSD  47388.5100   nan           nan%    39    nan   nan
Coinbase      BTCUSD  47153.6500   47651.3700    1.0%    37    nan   nan
Gemini        BTCUSD  47416.0900   47439.0500    0.0%    36    336   nan
Hitbtc        BTCUSD  47429.9900   47386.7400   -0.1%     0     0    0
Kraken        BTCUSD  47401.7000   47401.8000    0.0%     0     0    0
Exchangerates EURNOK  10.4012      10.4012       0.0%    38    76236 nan
Norgesbank    EURNOK  10.4012      10.4012       0.0%    31    76236 nan
Bitstamp      EURUSD  1.2030       1.2045        0.1%     2     2    1
Exchangerates EURUSD  1.2121       1.2121        0.0%    38    76236 nan
Norgesbank    USDNOK  8.5811       8.5811        0.0%    31    76236 nan
Yes, I notice the negative spread on Hitbtc. Either I fail to understand their Websocket API or they are sending bogus data. I've seen the same with Kraken, and suspect there is something wrong with the data they send.
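For the Hitbtc BTCUSD row the arithmetic is simple enough, assuming the Spread column is (ask - bid) / bid: the quoted ask (47386.74) sits below the bid (47429.99), so the spread works out to roughly -0.09%, which rounds to the -0.1% shown.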
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
If you do not want a domain to receive any mail, there is a way to be at least somewhat civil about it. There's a different DNS trick for that.
It used to be that if you went to the trouble of registering a domain, one of the duties that came with it was to set up somewhere to receive mail.
A number of networking professionals, myself included, have been known to insist that not only should a valid domain receive mail, but at least a significant subset of the identities listed in RFC2142 (dated May 1997) should exist, and mail sent there should be read at some reasonable interval.
Then of course we all know that a number of things happened in networking in the years between 1997 and today.
As regular or returning readers of this column will be aware, one of the phenomena that rose to become a prominent irritation and possible risk factor was spam, otherwise known as unsolicited commercial email, and of course some of the unsolicited traffic carried payloads that were part of various kinds of criminal activity.
I have written fairly extensively on how to suppress spam and other malicious traffic and have fun doing so, all the while assuming that if you run a domain you will want at least some mail to have a chance of making it to an inbox that is actually read by a person or perhaps processed by your robotic underlings.
Then there is the other consideration: with the proliferation of top-level domains, organizations that own trademarks, which in the early days would have seen the need only for a .com or .net domain (the latter was in fact originally intended for organizations involved in networking) or perhaps a country domain such as a .no or .se one, now tend to hoard domains in other top-level domains too.
There are of course those who try to exploit trademark protection too, as we have seen in, among other things, my brush with a certain Chinese registrar, or that time when what could only be seen as an extortion attempt, a little too forcefully telemarketed, landed me an otherwise white-elephant .se domain.
Now with the combination of domains that are for most practical purposes redundant and the likely burden of handling spam for them, it is understandable that attitudes started to shift. Finally, in June 2015, RFC7505 was issued, with a simple and practical solution, dubbed the NULL MX record. The RFC explains how to set one up, though in language that is not too easy to penetrate.
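For illustration, the mechanism fits in a single line of zone data: a NULL MX is an MX record with preference 0 pointing at the root label. A minimal sketch of what this would look like in a BIND-style zone file, for a hypothetical no-mail domain:

; this domain does not accept mail, per RFC7505 (NULL MX)
nomail.example.com. IN MX 0 .

A standards-aware sender that sees this record will bounce the message immediately instead of queueing and retrying.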
For any domain that runs a mail service, there should be at least one MX record. Looking up, say, bsdly.net with dig bsdly.net mx yields a response where the answer section lists the domain's MX records (also preserved as a screenshot).
"I would add a dmarc with p=reject too"
— Simon (@sa7sse) February 23, 2021
by Peter N. M. Hansteen (noreply@blogger.com) at February 23, 2021 11:40 AM
After intense work over many months, the Norwegian edition of Cory Doctorow's «Hvordan knuse overvåkningskapitalismen» ("How to Destroy Surveillance Capitalism") is finally finished and ready to delight millions of readers all over the world. The following press release was just sent out to Norwegian newsrooms:
What does big data do to us, and how do algorithms turn «fake news» into realities?
An important book on the subject is now also available in Norwegian. The book clarifies and proposes how we, as individuals but also nationally and internationally, can fight the big-data concentrations: «surveillance capitalism». The book is «Hvordan knuse overvåkingskapitalismen» by Dr. Cory Doctorow. The English edition came out a few days ago and is being launched with a webinar on Thursday 2021-01-28. Doctorow visited Norway and NUUG in December with his presentation Monopoly, Not Mind Control: What's Really Happening With "Surveillance Capitalism".
In finding after finding, example after example, Dr. Doctorow reviews and analyses the challenges we face on an ever larger scale. Not only in the USA, but here at home as well.
Cory Doctorow is a British-Canadian author, journalist and activist, known for his science fiction novels, for his work for the Creative Commons movement, and for his contributions to copyright reform. He is an honorary doctor of and visiting professor in computer science at the Open University in the UK, a consultant for the Electronic Frontier Foundation, and well known for his insightful commentary and writing on digital developments.
The book is now being launched in Norwegian, both as an ebook and on paper, translated by a volunteer crew led by Petter Reinholdtsen.
The book raises some fundamental questions of critical importance to society: What are the consequences when large parts of the Internet are dominated by a few big actors and their management tools and algorithms?
As individuals we should be concerned that limits are set and enforced: limits on surveillance of the individual, on the exercise of commercial and political influence, and on the formation of monopolies in the world of data. Setting such limits strengthens privacy.
The Norwegian Competition Authority is responsible for enforcing § 11 of the Competition Act, which prohibits «a dominant undertaking from improperly exploiting and abusing its dominant position». A corresponding prohibition is found in Article 54 of the EEA Agreement. The book goes into detail about the series of restrictions on freedom of choice that we encounter, restrictions that this legislation is precisely meant to prevent. Enforcing such legislation also benefits smaller businesses, which otherwise see their actual or potential opportunities for growth and establishment limited. «Such conduct may constitute an abuse and may take various forms», writes the Competition Authority.
In his book, Cory Doctorow goes further than that, with his many examples of conduct that should have been acted upon.
«The book should contribute to a stronger engagement from the guardians of the Internet, nationally and internationally, the EU included,» says translator Ole-Erik Yrvin, who continues: «We have therefore already raised the book's proposals directly with the Minister of Regional Development and Digitalisation, Linda Hofstad Helleland (H), and the Competition Authority, so that they can be followed up.»
«Norway too should take a driving role in this development,» says Petter Reinholdtsen. «Time is short, and the supervisory authorities must be given the tools and the resources they need for us to achieve the necessary results here at home. This concerns not only our own generation; it concerns all generations to come,» Petter Reinholdtsen concludes.
Contact information:
- Ole-Erik Yrvin, oeyrvin (at) gmail.com, +47 46500450
- Petter Reinholdtsen, pere (at) hungry.com
Relevant links:
- «Hvordan knuse overvåkingskapitalismen» can be ordered on paper, as an ebook, or read online via http://www.hungry.com/~pere/publisher/.
- A recording of the NUUG meeting Monopoly, Not Mind Control: What's Really Happening With "Surveillance Capitalism" with Cory Doctorow: https://www.nuug.no/aktiviteter/20201208-doctorow/.
- Registration for the webinar launching the English edition is available via https://craphound.com/category/destroy/.
- Cory Doctorow's website is https://craphound.com/.
As usual, if you use Bitcoin and want to show your support for what I do, I would appreciate Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b. Note that paying with bitcoin is not anonymous. :)
GNOME Internet Radio Locator 3.4.0 features updated language translations and a new, improved map marker palette. In addition to C-SPAN from the United States Supreme Court, Congress and Senate, it now also includes streaming radio from Washington, United States of America (WAMU/NPR); London, United Kingdom (BBC World Service); Berlin, Germany (Radio Eins); Norway (NRK); and Paris, France (France Inter/Info/Culture), as well as 119 other radio stations from around the world, with live audio streaming implemented through GStreamer. The project lives on www.gnomeradio.org, and Fedora 32 RPM packages for version 3.4.0 of GNOME Internet Radio Locator are now also available:
gnome-internet-radio-locator.spec
gnome-internet-radio-locator-3.4.0-1.fc32.src.rpm
gnome-internet-radio-locator-3.4.0-1.fc32.x86_64.rpm
To install GNOME Internet Radio Locator 3.4.0 on Fedora Core 32 in Terminal:
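Presumably along these lines, assuming the x86_64 RPM linked above has been downloaded to the current directory (dnf pulls in the remaining dependencies from the standard Fedora repositories):

sudo dnf install ./gnome-internet-radio-locator-3.4.0-1.fc32.x86_64.rpm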
by oleaamot at October 01, 2020 12:00 PM
Since our family relies on the servers in our basement for email, music, movies, and so on, it has been bothering me that we have no extra hardware in case something goes wrong.
I've been given two old Dell machines; I want to use one for stuff and one as a spare in case something breaks.
Both needed reinstalling. I like Ubuntu and made myself a memory stick with Ubuntu's usb-creator software, but the machines never managed to boot off it. The Ubuntu server ISOs are too large to fit on a CD, but I found Ubuntu's netboot ISOs for 18.04, which fit on a CD ten times over. Again the machines didn't boot from the memory stick nor off a CD, giving various error messages or just passing the media by and booting off the old OS on the hard drive.
After a while I recalled unetbootin, an old but still updated tool to make memory sticks from ISO images. It currently lives here: https://unetbootin.github.io/
I had to repartition my memory stick (cfdisk /dev/sdc in my case), make an MS-DOS filesystem on it (mkfs.msdos /dev/sdc1) and mount it.
Then unetbootin could make it bootable, and indeed my quite old hardware was able to see the memory stick as a hard drive and boot from it with no further issues.
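For reference, the stick preparation amounts to something like the following sketch, assuming the stick shows up as /dev/sdc as it did here; double-check the device name with lsblk first, since this destroys whatever is on it:

lsblk
sudo cfdisk /dev/sdc      # create a single primary partition
sudo mkfs.msdos /dev/sdc1 # FAT filesystem for unetbootin to write to
sudo mount /dev/sdc1 /mnt # /mnt is an arbitrary mount point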
Win of the day.
by nicolai (noreply@blogger.com) at September 27, 2020 11:14 AM
wtf, zsh
% uname -sr
FreeBSD 12.1-RELEASE-p10
% for sh in sh csh bash zsh ; do printf "%-8s" $sh ; $sh -c 'echo \\x21' ; done
sh      \x21
csh     \x21
bash    \x21
zsh     !
% cowsay wtf, zsh
 __________
< wtf, zsh >
 ----------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
I mean. Bruh. I know it’s intentional & documented & can be turned off, but every other shell defaults to POSIX semantics…
BTW:
% ln -s =zsh /tmp/sh
% /tmp/sh -c 'echo \x21'
\x21
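For the record, the knob alluded to above is presumably zsh's BSD_ECHO option, documented in zshoptions(1), which makes the echo builtin behave like BSD echo(1) and leave backslash escapes alone unless -e is given:

% echo '\x21'
!
% setopt BSD_ECHO
% echo '\x21'
\x21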
by Dag-Erling Smørgrav at September 22, 2020 01:11 PM
I’ve been messing around with Linux auditing lately, because of reasons, and ended up having to replicate most of libaudit, because of other reasons, and in the process I found bugs in both the kernel and userspace parts of the Linux audit subsystem.
Let us start with what Netlink is, for readers who aren’t very familiar with Linux: it is a mechanism for communicating directly with kernel subsystems using the BSD socket API, rather than by opening device nodes or files in a synthetic filesystem such as /proc. It has pros and cons, but mostly pros, especially as a replacement for ioctl(2), since Netlink sockets are buffered, can be poll(2)ed, and can more easily accommodate variable-length messages and partial reads.
Note: all links to source code in this post point to the versions used in Ubuntu 18.04 as of 2020-08-21: kernel 5.4, userspace 2.8.2.
Netlink messages start with a 16-byte header which looks like this: (source, man page)
struct nlmsghdr {
__u32 nlmsg_len; /* Length of message including header */
__u16 nlmsg_type; /* Message content */
__u16 nlmsg_flags; /* Additional flags */
__u32 nlmsg_seq; /* Sequence number */
__u32 nlmsg_pid; /* Sending process port ID */
};
The same header also provides a few macros to help populate and interpret Netlink messages: (source, man page)
#define NLMSG_ALIGNTO 4U
#define NLMSG_ALIGN(len) ( ((len)+NLMSG_ALIGNTO-1) & ~(NLMSG_ALIGNTO-1) )
#define NLMSG_HDRLEN ((int) NLMSG_ALIGN(sizeof(struct nlmsghdr)))
#define NLMSG_LENGTH(len) ((len) + NLMSG_HDRLEN)
#define NLMSG_SPACE(len) NLMSG_ALIGN(NLMSG_LENGTH(len))
#define NLMSG_DATA(nlh) ((void*)(((char*)nlh) + NLMSG_LENGTH(0)))
#define NLMSG_NEXT(nlh,len) ((len) -= NLMSG_ALIGN((nlh)->nlmsg_len), \
(struct nlmsghdr*)(((char*)(nlh)) + NLMSG_ALIGN((nlh)->nlmsg_len)))
#define NLMSG_OK(nlh,len) ((len) >= (int)sizeof(struct nlmsghdr) && \
(nlh)->nlmsg_len >= sizeof(struct nlmsghdr) && \
(nlh)->nlmsg_len <= (len))
#define NLMSG_PAYLOAD(nlh,len) ((nlh)->nlmsg_len - NLMSG_SPACE((len)))
Going by these definitions and the documentation, it is clear that the length field of the message header reflects the total length of the message, header included. What is somewhat less clear is that Netlink messages are supposed to be padded out to a multiple of four bytes before transmission or storage.
The Linux audit subsystem not only breaks these rules, but does not even agree with itself on precisely how to break them.
The userspace tools (auditctl(8), auditd(8), …) all use libaudit to communicate with the kernel audit subsystem. When passing a message of length size to the kernel, libaudit copies the payload into a large pre-zeroed buffer, sets the type, flags, and sequence number fields to the appropriate values, sets the pid field to zero (which is probably a bad idea but, strictly speaking, permitted), and finally sets the length field to NLMSG_SPACE(size), which evaluates to sizeof(struct nlmsghdr) + size rounded up to a multiple of four. It then writes that exact number of bytes to the socket.
Bug #1: The length field should not be rounded up; the purpose of the NLMSG_SPACE() and NLMSG_NEXT() macros is to ensure proper alignment of subsequent message headers when multiple messages are stored or transmitted consecutively. The length field should be computed using NLMSG_LENGTH(), which simply adds the length of the header to its single argument.
Note: to my understanding, Netlink supports sending multiple messages in a single send / receive provided that they are correctly aligned, that they all have the NLM_F_MULTI flag set, and that the last message in the sequence is a zero-length message of type NLMSG_DONE. The audit subsystem does not use this feature.
Moving on: NETLINK_AUDIT messages essentially fall into one of four categories:

- Requests from userspace, such as an AUDIT_GET message which requests the current status of the audit subsystem, an AUDIT_SET message which changes parameters, or an AUDIT_LIST_RULES message which requests a list of currently active auditing rules.
- Responses from the kernel: it responds to an AUDIT_GET request with a message of the same type containing a struct audit_status, and to an AUDIT_LIST_RULES request with a sequence of messages of the same type, each containing a single struct audit_rule_data.
- Acknowledgements and errors: NLMSG_ERROR in response to an invalid request (or a valid request with the NLM_F_ACK flag set), or NLMSG_DONE at the end of a multi-part response.
- Audit data messages, whose payload is text starting with a prefix of the form audit(timestamp:serial): which uniquely identifies the event, followed by a space-separated list of key-value pairs. The final message for an event has the type AUDIT_EOE and has the same header, trailing space included, but no data.

The kernel pads responses, errors and acknowledgements, but does not include that padding in the length reported in the message header. So far, so good. However…
Bug #2: Audit data messages are sent from the kernel without padding.
This is not critical, but it does mean that an implementation that batches up incoming messages and stores them consecutively must take extra care to keep them properly aligned.
Bug #3: The length field on audit data messages does not include the length of the header.
This is jaw-dropping. It is so fundamentally wrong. It means that anyone who wants to talk to the audit subsystem using their own code instead of libaudit will have to add a workaround to the Netlink layer of their stack to either fix or ignore the error, and apply that workaround only for certain message types.
How has this gone unnoticed? Well, libaudit doesn’t do much input validation. It relies on the NLMSG_OK() macro, which checks only three things:

- the length of the buffer (as returned by recvfrom(2), for instance) is no less than the length of a Netlink message header;
- the length field in the message header is no less than the length of a Netlink message header;
- the length field in the message header does not exceed the length of the buffer.

Since every audit data message, even the empty AUDIT_EOE message, begins with a timestamp and serial number, the length of the payload is never less than 25-30 bytes, and NLMSG_OK() is always satisfied. And since the audit subsystem never sends multiple messages in a single send / receive, it does not matter that NLMSG_NEXT() will be off by 16 bytes.
Consumers of libaudit don’t notice either because they never look at the header; libaudit wraps the message in its own struct audit_reply with its own length and type fields and pointers of the appropriate types for messages that contain binary data (this is a bad idea for entirely different reasons which we won’t go into here). The only case in which the caller needs to know the length of the message is for audit events, when the length field just happens to be the length of the payload, just like the caller expects.
The odds of these bugs getting fixed are approximately zero, because existing applications will break in interesting ways if the kernel starts setting the length field correctly.
Turing wept.
THIS IS WHY WE CAN’T HAVE NICE THINGS
by Dag-Erling Smørgrav at August 21, 2020 04:33 PM
Today I released GNOME Gingerblue version 0.2.0 with the basic new features:
- Song recordings are written as <Name> - <Song> - <ISO 8601 timestamp>.ogg
- Recordings are stored in G_USER_DIRECTORY_MUSIC ($HOME/Music/)

I began work on GNOME Gingerblue on July 4th, 2018, two years ago, and I am going to spend the next four years completing it for GNOME 4.
GNOME Gingerblue will be a Free Software program for musicians to compose, record and share original music on the Internet from the GNOME Desktop.
The project isn’t yet ready for distribution with GNOME 3; the GUI and features such as meta tagging and Internet uploads must still be implemented.
The GNOME release team complained about the early release cycle in July and called the project empty, but I estimate it will take at least 4 years to complete 4.0.0, in reasonable time for GNOME 4, which I expect to be released between 2020 and 2026.
The Internet community can’t have Free Music without Free Recording Software for GNOME, but GNOME 4 isn’t built in 1 day.
I am trying to get gtk_record_button_new() into GTK+ 4.0.
I hope to work more on the first major release of GNOME Gingerblue during Christmas 2020 and perhaps get meta tags working as a new feature in 1.0.0.
Meanwhile you can visit the GNOME Gingerblue project domain www.gingerblue.org with the GNOME wiki page, test the initial GNOME Gingerblue 0.2.0 release that writes and records Song files from the microphone in $HOME/Music/ with Wizard GUI and XML parsing from August 2018, or spend money on physical goods such as the Norsk Kombucha GingerBlue soda or the Ngs Ginger Blue 15.6″ laptop bag.
by oleaamot at July 29, 2020 06:00 PM
There are millions of books whose copyright term has expired. Some of them are Norwegian books, and a number of them are not available in digital form. To try to do something about the latter, NUUG has decided to have a book scanner built. The design is based on a simple variant in plastic (building instructions), but it will be made in aluminium for a longer service life.
The job of building the scanner has been given to our friends at Oslo Sveisemek, who are well under way with the work. Here you see a sketch of the construction:
The base frame has been assembled, but a good deal of work still remains:
The idea is that members and others will be able to borrow or rent the book scanner when needed, and those of us who are interested can get started digitising books with OCR and determination. Get in touch at aktive (at) nuug.no if this is something for you, or drop by #nuug.
(Photographer: Jonny Birkelund)
On Monday 27 January 2019 from 08:30 to 11:00, OsloMet and NUUG are holding a breakfast seminar on the Noark 5 service interface (Noark 5 tjenestegrensesnitt). We find that there are quite a few misunderstandings around the service interface, and with this event we want to clear them up and put the focus on the importance of standardisation.
The archives must take their place in a data-driven world, and standardisation and metadata are more important now than ever. Do you want to know more about how standardised records management can help you avoid vendor lock-in? Do you want to avoid the creation of new digital silos? Do you want to reduce archiving costs in the long run? Join us and find out more about what a standardised, future-oriented records management API can do for you.
Participation is free (breakfast is on the house), but seats are limited. The seminar will be streamed online, and a recording will be published afterwards.
More information and registration can be found on NUUG's event page.
by nicolai (noreply@blogger.com) at December 08, 2019 09:51 PM
If we had made a program that translated from Norwegian into Sami, the result would have been Sami at least as bad as the Norwegian we are able to produce today. Norwegian and Sami are grammatically very different, and it is hard to produce good Sami from a Norwegian source. Such a program would lead to the publication of a whole lot of very bad Sami. A situation where most of the Sami published on the Internet comes from our programs strikes us as a nightmare. It would quite simply have destroyed Sami written culture.
See the op-ed: https://www.nordnorskdebatt.no/samisk-sprak/digitalisering/facebook/kan-samisk-brukes-i-det-offentlige-rom/o/5-124-48030
by unhammer at May 31, 2018 09:00 AM
Following up on the CentOS 7 root filesystem on tmpfs post, here comes a guide on how to run a ZFS enabled CentOS 7 NAS server (with the operating system) from tmpfs.
The disk image is built in macOS using Packer and VirtualBox. VirtualBox is installed using the appropriate platform package downloaded from their website, and Packer is installed using brew:
$ brew install packer
Three files are needed in order to build the disk image; a Packer template file, an Anaconda kickstart file and a shell script that is used to configure the disk image after installation. The following files can be used as examples:
- template.json (Packer template example file)
- ks.cfg (Anaconda kickstart example file)
- provision.sh (Provision shell script example file)

Create some directories:
$ mkdir ~work/centos-7-zfs/
$ mkdir ~work/centos-7-zfs/http/
$ mkdir ~work/centos-7-zfs/scripts/
Copy the files to these directories:
$ cp template.json ~work/centos-7-zfs/
$ cp ks.cfg ~work/centos-7-zfs/http/
$ cp provision.sh ~work/centos-7-zfs/scripts/
Modify each of the files to fit your environment.
Start the build process using Packer:
$ cd ~work/centos-7-zfs/
$ packer build template.json
This will download the CentOS 7 ISO file, start an HTTP server to serve the kickstart file and start a virtual machine using Virtualbox:
The virtual machine will boot into Anaconda and run through the installation process as specified in the kickstart file:
When the installation process is complete, the disk image will be available in the output-virtualbox-iso folder with the vmdk extension.
The disk image is now ready to be put in initramfs.
This section is quite similar to the previous blog post CentOS 7 root filesystem on tmpfs but with minor differences. For simplicity reasons it is executed on a host running CentOS 7.
Create the build directories:
$ mkdir /work
$ mkdir /work/newroot
$ mkdir /work/result
Export the files from the disk image to one of the directories we created earlier:
$ export LIBGUESTFS_BACKEND=direct
$ guestfish --ro -a packer-virtualbox-iso-1508790384-disk001.vmdk -i copy-out / /work/newroot/
Modify /etc/fstab:
$ cat > /work/newroot/etc/fstab << EOF
tmpfs / tmpfs defaults,noatime 0 0
none /dev devtmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
EOF
Disable selinux:
echo "SELINUX=disabled" > /work/newroot/etc/selinux/config
Disable clearing the screen on login failure to make it possible to read any error messages:
mkdir /work/newroot/etc/systemd/system/getty@.service.d
cat > /work/newroot/etc/systemd/system/getty@.service.d/noclear.conf << EOF
[Service]
TTYVTDisallocate=no
EOF
Now jump to the Initramfs and Result sections in the CentOS 7 root filesystem on tmpfs post and follow those steps until the end, when the result is a vmlinuz and an initramfs file.
The first time the NAS server boots on the disk image, the ZFS storage pool and volumes will have to be configured. Refer to the ZFS documentation for information on how to do this, and use the following commands only as guidelines.
Create the storage pool:
$ sudo zpool create data mirror sda sdb mirror sdc sdd
Create the volumes:
$ sudo zfs create data/documents
$ sudo zfs create data/games
$ sudo zfs create data/movies
$ sudo zfs create data/music
$ sudo zfs create data/pictures
$ sudo zfs create data/upload
Share some volumes using NFS:
zfs set sharenfs=on data/documents
zfs set sharenfs=on data/games
zfs set sharenfs=on data/music
zfs set sharenfs=on data/pictures
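To check one of the shares from a client machine, a sketch along these lines should do, with nas.example.com standing in for the server's actual name:

$ sudo mount -t nfs nas.example.com:/data/music /mnt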
Print the storage pool status:
$ sudo zpool status
pool: data
state: ONLINE
scan: scrub repaired 0B in 20h22m with 0 errors on Sun Oct 1 21:04:14 2017
config:
NAME STATE READ WRITE CKSUM
data ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdd ONLINE 0 0 0
sdc ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
sda ONLINE 0 0 0
sdb ONLINE 0 0 0
errors: No known data errors
Mimes brønn is a web service that helps you request access to documents from the Norwegian public administration in accordance with the Freedom of Information Act (offentleglova) and the Environmental Information Act. The service has a publicly accessible archive of all answers received to access requests, so that public bodies will not have to answer the same requests over and over again. You will find the service at
According to old Norse mythology, the well of knowledge is guarded by Mime and lies beneath one of the roots of the world tree Yggdrasil. Drinking the water of Mime's well gave such valuable knowledge and wisdom that the young god Odin was willing to leave an eye in pledge, becoming one-eyed, to be allowed to drink from it.
The site is maintained by the NUUG association and is particularly well suited for politically interested people, organisations, and journalists. The service is based on the British sister service WhatDoTheyKnow.com, which has already provided access resulting in documentaries and countless press stories. According to mySociety, some years ago about 20% of access requests to central authorities went via WhatDoTheyKnow. We in NUUG hope that NUUG's service Mimes brønn can be just as useful for the inhabitants of Norway.
This weekend the service was updated with a lot of new functionality. The new version works better on small screens, and now shows the delivery status of requests, so that the sender can more easily check that the recipient's email system has confirmed receipt of the access request. The service was set up by volunteers in the NUUG association and launched in the summer of 2015. Since then, 121 users have sent in more than 280 requests about everything from wedding rentals of the Opera house and negotiations over the use of Norway's top-level DNS domain .bv, to the recording of applications for housing support, and the site is a small treasure chest of interesting and useful information. NUUG has engaged lawyers who can assist with appeals against denied access or deficient case handling.
– «NUUG's Mimes brønn was invaluable when we succeeded in ensuring that the .bv DNS top-level domain remains in Norwegian hands,» says Håkon Wium Lie.
The service documents widely diverging practices in the handling of access requests, both in response times and in the content of the answers. The vast majority are handled quickly and correctly, but in several cases access has been granted to documents that the responsible agency later wished to withdraw, and access has been granted where redaction was carried out in a way that fails to hide the information that was meant to be redacted.
– «The Freedom of Information Act is a cornerstone of our democracy. It does not care who asks for access, or why. The Mimes brønn project is a materialisation of this principle, where anyone can request access and appeal refusals, and where the documentation is made public. This makes Mimes brønn one of the most exciting transparency projects I have seen in recent times,» says Vegard Venli, the man who got the Tax Administration's register of company ownership opened up.
We in the NUUG association hope Mimes brønn can be a useful tool for keeping our democracy in good shape.
by Mimes Brønn at February 13, 2017 02:07 PM
Several years ago I wrote a series of posts on how to run EL6 with its root filesystem on tmpfs. This post is a continuation of that series, and explains step by step how to run CentOS 7 with its root filesystem in memory. It should apply to RHEL, Ubuntu, Debian and other Linux distributions as well. The post is a bit terse to focus on the concept, and several of the steps have potential for improvements.
The following is a screen recording from a host running CentOS 7 in tmpfs:
A build host is needed to prepare the image to boot from. The build host should run CentOS 7 x86_64, and have the following packages installed:
yum install libvirt libguestfs-tools guestfish
Make sure the libvirt daemon is running:
systemctl start libvirtd
Create some directories that will be used later, however feel free to relocate these to somewhere else:
mkdir -p /work/initramfs/bin
mkdir -p /work/newroot
mkdir -p /work/result
For simplicity reasons we’ll fetch our rootfs from a pre-built disk image, but it is possible to build a custom disk image using virt-manager. I expect that most people would like to create their own disk image from scratch, but this is outside the scope of this post.
Use virt-builder to download a pre-built CentOS 7.3 disk image and set the root password:
virt-builder centos-7.3 -o /work/disk.img --root-password password:changeme
Export the files from the disk image to one of the directories we created earlier:
guestfish --ro -a /work/disk.img -i copy-out / /work/newroot/
Clear fstab since it contains mount entries that no longer apply:
echo > /work/newroot/etc/fstab
SELinux will complain about incorrect disk label at boot, so let’s just disable it right away. Production environments should have SELinux enabled.
echo "SELINUX=disabled" > /work/newroot/etc/selinux/config
Disable clearing the screen on login failure to make it possible to read any error messages:
mkdir /work/newroot/etc/systemd/system/getty@.service.d
cat > /work/newroot/etc/systemd/system/getty@.service.d/noclear.conf << EOF
[Service]
TTYVTDisallocate=no
EOF
We’ll create our custom initramfs from scratch. The boot procedure will be, simply put:

- The kernel executes /init (in the initramfs).
- /init mounts a tmpfs mount point.
- The root filesystem archive is extracted onto the tmpfs mount point.
- /init calls switch_root to boot on the CentOS 7 root filesystem.

The initramfs will be based on BusyBox. Download a pre-built binary or compile it from source, and put the binary in the initramfs/bin directory. In this post I’ll just download a pre-built binary:
wget -O /work/initramfs/bin/busybox https://www.busybox.net/downloads/binaries/1.26.1-defconfig-multiarch/busybox-x86_64
Make sure that busybox has the execute bit set:
chmod +x /work/initramfs/bin/busybox
Create the file /work/initramfs/init with the following contents:
#!/bin/busybox sh
# Dump to sh if something fails
error() {
echo "Jumping into the shell..."
setsid cttyhack sh
}
# Populate /bin with binaries from busybox
/bin/busybox --install /bin
mkdir -p /proc
mount -t proc proc /proc
mkdir -p /sys
mount -t sysfs sysfs /sys
mkdir -p /sys/dev
mkdir -p /var/run
mkdir -p /dev
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
# Populate /dev
echo /bin/mdev > /proc/sys/kernel/hotplug
mdev -s
mkdir -p /newroot
mount -t tmpfs -o size=1500m tmpfs /newroot || error
echo "Extracting rootfs... "
xz -d -c -f rootfs.tar.xz | tar -x -f - -C /newroot || error
mount --move /sys /newroot/sys
mount --move /proc /newroot/proc
mount --move /dev /newroot/dev
exec switch_root /newroot /sbin/init || error
Make sure it is executable:
chmod +x /work/initramfs/init
Create the root filesystem archive using tar. The following command also uses xz compression to reduce the final size of the archive (from approximately 1 GB to 270 MB):
cd /work/newroot
tar cJf /work/initramfs/rootfs.tar.xz .
Create initramfs.gz using:
cd /work/initramfs
find . -print0 | cpio --null -ov --format=newc | gzip -9 > /work/result/initramfs.gz
Copy the kernel directly from the root filesystem using:
cp /work/newroot/boot/vmlinuz-*x86_64 /work/result/vmlinuz
The /work/result directory now contains two files with file sizes similar to the following:
ls -lh /work/result/
total 277M
-rw-r--r-- 1 root root 272M Jan 6 23:42 initramfs.gz
-rwxr-xr-x 1 root root 5.2M Jan 6 23:42 vmlinuz
These files can be loaded directly in GRUB from disk, or using iPXE over HTTP using a script similar to:
#!ipxe
kernel http://example.com/vmlinuz
initrd http://example.com/initramfs.gz
boot
Mimes brønn has now been up for about a year, so we thought it could be interesting to share some brief statistics about how the service has been used.
At the beginning of July 2016, Mimes brønn had 71 registered users who had sent out 120 access requests, of which 62 (52%) were successful, 19 (16%) partially successful, 14 (12%) refused, 10 (8%) were answered with the body not holding the information, and 12 requests (10%; 6 from 2016, 6 from 2015) were still unanswered. A few (3) of the requests could not be categorised. We thus see that around two thirds of the requests were successful, wholly or in part. That is good!
The time until the body first responds varies a lot, from the same day (some requests sent to Utlendingsnemnda, Statens vegvesen, Økokrim, Mediatilsynet, Datatilsynet, Brønnøysundregistrene) up to 6 months (Ballangen municipality) or longer (Stortinget, the Ministry of Petroleum and Energy, the Ministry of Justice and Public Security, UDI and SSB have received access requests that are still unanswered). The average here was a couple of weeks (leaving out the 12 cases where no answer has arrived). It follows from § 29 first paragraph of the Freedom of Information Act that requests for access to the administration's documents shall be answered «without undue delay», which according to the Parliamentary Ombudsman should in most cases be interpreted as «the same day or at any rate within 1-3 working days». So there is room for improvement here.
The right of appeal (offentleglova § 32) was used in 20 of the access requests. In most (15; 75%) of those cases the appeal led to the request succeeding. The average time to get an answer to an appeal was a month (leaving out 2 cases, appeals sent to Statens vegvesen and Ruter AS, where no answer has arrived). Appealing is well worth it, and completely free! The Parliamentary Ombudsman has stated that 2-3 weeks is beyond acceptable processing time for appeals.
Most requests had been sent to the Ministry of Foreign Affairs (9), closely followed by Fredrikstad municipality and Brønnøysundregistrene. In all, requests were sent to 60 public authorities, of which 27 received two or more. There are over 3700 authorities in the Mimes brønn database, so most of them have yet to receive an access request via the service.
Looking at what kind of information people have asked for, we see a broad spectrum of interests: everything from the municipality's parking spaces, travel expense claims exceeding the state's accommodation rates, and correspondence about asylum reception centres and negotiations over the .bv top-level domain, to documents about Myanmar.
The authorities do all manner of things. Some of it is done badly, some of it they do well. The more we find out about how the authorities work, the greater our chances of proposing improvements to what works badly… and applauding what is good. If there is something you want access to, just go to https://www.mimesbronn.no/ and you are on your way.
by Mimes Brønn at July 15, 2016 03:56 PM
Twitter user @IngeborgSteine recently got quite a bit of attention when she tweeted a picture of the Nynorsk version of her economics exam at NTNU:
Dette var min økonomieksamen på "nynorsk". #nynorsk #noregsmållag #kvaialledagar https://t.co/RjCKSU2Fyg ["This was my economics exam in 'Nynorsk'."] —
Ingeborg Steine (@IngeborgSteine) May 30, 2016
Creative coinages like *kvisleis and all the dialect forms and archaisms would have been unlikely to appear in a machine-translated version, so I wondered how much better or worse it would have been if the examiner had simply used Apertium instead? Ingeborg Steine was kind enough to post the Bokmål version, so let's give it a try.
No kvisleis, and free of 'tær' and 'fyr', but it is not perfect either: certain words are missing from the dictionaries and therefore get the wrong inflection, 'teller' is interpreted as a noun, 'ein anna maskin' has the wrong inflection of the first word (a rule was missing there), and 'at' is in one place interpreted as an adverb (which leads to the odd fragment 'det verta at anteke tilvarande'). In addition, the language is detected as Tatar by the web page, so perhaps the Norwegian was a bit heavy? But these errors are not particularly hard to fix; the development version of Apertium now gives:
There are still a couple of small things that could be fixed, but it is already better than most of the exams I was handed at UiO …
by unhammer at June 01, 2016 09:45 AM
One of the biggest takeaways from 31C3 and the most recent Snowden-leaked NSA documents is that a lot of SSH stuff is .. broken.
I’m not surprised, but then again I never am when it comes to this paranoia stuff. However, I do run a ton of SSH in production and know a lot of people that do. Are we all fucked? Well, almost, but not really.
Unfortunately most of what Stribika writes about the “Secure Secure Shell” doesn’t work for old production versions of SSH. The cliff notes for us real-world people, who will realistically be running SSH 5.9p1 for years, are hidden in the bettercrypto.org repo.
Edit your /etc/ssh/sshd_config:
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160
KexAlgorithms diffie-hellman-group-exchange-sha256
Basically, the nice and forward-secure aes-*-gcm and chacha20-poly1305 ciphers, the curve25519-sha256 Kex algorithm, and the Encrypt-then-MAC message authentication modes are not available to those of us stuck in the early 2000s. That's right, provably NSA-proof stuff not supported. Upgrading at this point makes sense.
Still, we can harden SSH, so go into /etc/ssh/moduli and delete all the moduli that have 5th column < 2048, and disable ECDSA host keys:
cd /etc/ssh
mkdir -p broken
mv moduli ssh_host_dsa_key* ssh_host_ecdsa_key* ssh_host_key* broken
awk '{ if ($5 > 2048){ print } }' broken/moduli > moduli
# create broken links to force SSH not to regenerate broken keys
ln -s ssh_host_ecdsa_key ssh_host_ecdsa_key
ln -s ssh_host_dsa_key ssh_host_dsa_key
ln -s ssh_host_key ssh_host_key
Your clients, which hopefully have more recent versions of SSH, could have the following settings in /etc/ssh/ssh_config or .ssh/config:
Host all-old-servers
    Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-ripemd160
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Note: Sadly, the -ctr ciphers do not provide forward security and hmac-ripemd160 isn’t the strongest MAC. But if you disable these, there are plenty of places you won’t be able to connect to. Upgrade your servers to get rid of these poor auth methods!
There, done.
Updated Jan 6th to highlight the problems of not upgrading SSH.
Updated Jan 22nd to note CTR mode isn’t any worse.
Go learn about COMSEC if you didn’t get trolled by the title.
by kacper at January 06, 2015 04:33 PM
Intermission..
Recently I've been doing some video editing… less editing than tweaking my system, though. If you want your JACK output to talk to Kdenlive, a most excellent video editing suite, and output audio in a nice way without choppiness and popping (which I promise you is not nice), you'll want to pipe it through PulseAudio, because the ALSA-to-JACK stuff doesn't do well with Phonon, at least not on this convoluted setup.
Remember, to get that setup to work, ALSA pipes to JACK with the pcm.jack { type jack .. thing, and you remove the ALSA-to-PulseAudio stupidity at /usr/share/alsa/alsa.conf.d/50-pulseaudio.conf.
So, once that's in place, it won't play even though Pulse found your JACK, because your clients are defaulting out on some ALSA device… this is when you change /etc/pulse/client.conf and set default-sink = jack_out.
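For clarity, that last change is a single line in /etc/pulse/client.conf (jack_out being the sink name the PulseAudio JACK module registered on this setup; check pactl list short sinks for yours):

# /etc/pulse/client.conf
default-sink = jack_out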
by kacper at December 08, 2014 12:18 AM
Want one? It is not on sale in Norway yet, but you can buy it on Amazon. Read here how I bought mine on Amazon (scroll a bit down the page). With Norwegian VAT, delivered to the Rimi shop 100 metres from where I live, it came to 1,850 kroner. It is absolutely worth it :)
by Bjorn Venn at February 24, 2013 07:34 PM
Google's new Chromebook, the Chromebook Pixel. For now only on sale in the USA and the UK via Google Play and BestBuy.
The world is unfair :)
by Bjorn Venn at February 22, 2013 12:44 PM
$typedef = 'A8 A16 A16 L';
$sizeof = length pack($typedef, () );
while ( read(WTMP, $buffer, $sizeof) == $sizeof ) {
    ($line, $user, $host, $time) = unpack($typedef, $buffer);
    # Do whatever you want with these values here
}

FreeBSD thus only uses the values line (ut_line), user (ut_name), host (ut_host) and time (ut_time), cf. utmp.h. Linux (x64, who cares about 32-bit?), on the other hand, stores quite a bit more in the wtmp log, and after some googling, trial and error, and peeking in bits/utmp.h, I arrived at:

$typedef = "s x2 i A32 A4 A32 A256 s2 l i2 i4 A20";
$sizeof = length pack($typedef, () );
while ( read(WTMP, $buffer, $sizeof) == $sizeof ) {
    ($type, $pid, $line, $id, $user, $host, $term, $exit,
     $session, $sec, $usec, $addr, $unused) = unpack($typedef, $buffer);
    # Do whatever you want with these values here
}

Which just works, splendidly. I can then watch users logging in and out in real time, and take action based on that.