For a number of years, those involved in networking things have all had one thing in common: A TODO item about either getting started implementing IPv6 in their networks, or at least getting started on planning to go dual-stack or even purely IPv6.
And as anybody who follows networking things is aware, IPv6 adoption in some parts of the world has been advancing hardly at all, even in jurisdictions where actual legislation is in place that mandates IPv6 use and leaves IPv4 support as an allowable option.
I have been involved in some capacity or other in enough of these efforts over the years that I have been pondering writing a piece about the slow rate of adoption, and of course I hoped to be able to write something witty or at least entertaining about something that is in fact, to some of us at least, a somewhat frustrating situation.
Then today I attended an internal presentation by a colleague that included some examples of using ChatGPT for writing as well as programming.
The demonstration was enough fun that I thought I would see if the robot could offer some insights and perhaps get me started on writing the piece I had been pondering.
Signing up and getting access to a trial was painless and quick, so some things have improved since I first tried to access the service a few weeks ago when news about the service first hit.
The first chat went like this:
PNMH: managers and admins put off setting up IPv6 in their networks. write a short story on how and why they procrastinate and make excuses.
ChatGPT: Once upon a time, in a large corporation, there was a group of managers and administrators responsible for the network infrastructure. They had heard about IPv6, the new internet protocol, and knew that it was important to adopt it to future-proof the company's network. However, they kept putting off the task of implementing IPv6, and here's why:
Firstly, they felt overwhelmed by the complexity of IPv6. They were used to IPv4, and the thought of transitioning to a new protocol seemed daunting. They convinced themselves that they needed more time to research and plan the transition thoroughly.
Secondly, they thought that their current IPv4 infrastructure was sufficient, and that there was no need to change anything. They believed that IPv6 adoption was only necessary for larger companies and service providers, not for their organization.
Thirdly, they were concerned about the cost of implementing IPv6. They felt that it would require a significant investment in new hardware, software, and training, and they were not sure if the benefits of IPv6 would justify the expense.
Finally, they were preoccupied with other pressing matters, such as meeting deadlines, managing budgets, and dealing with daily operational issues. IPv6 implementation was not a top priority for them, and they kept pushing it back on their to-do lists.
As time went by, the managers and administrators continued to procrastinate, making excuses and justifications for not implementing IPv6. They were oblivious to the fact that their competitors were already using IPv6 and gaining a competitive edge. They failed to realize that IPv6 adoption was not a choice but a necessity for staying relevant and competitive in the digital age.
In the end, the managers and administrators were forced to implement IPv6 when their IPv4 addresses ran out, causing network disruptions and downtime. They learned the hard way that procrastination and excuses only lead to costly delays and missed opportunities.
Also accessible as pure text as chatgpt_on_ipv6_procrastination.txt
As one of my Mastodon commenters said, it looks like this robot has peeked into too many IT management meetings.
I would myself have been more worried about ensuring that services we run are accessible to IPv6-only networks, and for that matter that users in our own networks should be able to access services in IPv6-only parts of the Internet.
This shows that whatever they trained the system on seems to have included the most frequently cited reasons for not actually moving forward with IPv6 implementation or other network modernization. But the thing had stylistic issues, and I was looking for something fresher. So on to the next try.
Some stakeholders, such as Norwegian public sector entities, are under a legal obligation to make their services available over IPv6 and to enable clients in their networks to access IPv6 resources; the final deadline for that implementation in principle passed at the end of 2022.
So I tried stressing that point in my second try. Here is a record of that session:
PNMH: why do network designers always postpone their IPv6 implementation even past legally required deadlines?
ChatGPT: There are a few reasons why network designers may postpone their IPv6 implementation, even past legally required deadlines:
Compatibility issues: IPv4 and IPv6 are not directly compatible with each other, which means that devices using IPv4 cannot communicate directly with devices using IPv6. This creates interoperability issues that can make it difficult for organizations to transition to IPv6 without causing disruptions to their existing networks.
Cost: Implementing IPv6 requires an investment in new hardware, software, and training. For some organizations, the cost of upgrading their infrastructure to support IPv6 may be prohibitive, especially if they have already invested heavily in their existing IPv4 infrastructure.
Lack of perceived need: Some organizations may not see the urgency in transitioning to IPv6, especially if they are not experiencing any immediate problems with their IPv4 networks. This can lead to a "wait and see" approach, where organizations postpone their IPv6 implementation until they are forced to do so by external factors.
Complexity: IPv6 is more complex than IPv4, which can make it more difficult for network designers to implement and manage. This complexity can also make it more challenging to troubleshoot network issues that may arise after the transition to IPv6.
Lack of IPv6 support from vendors: Some network hardware and software vendors may not yet fully support IPv6, which can make it difficult for organizations to transition to IPv6 without making significant changes to their existing infrastructure.
Overall, transitioning to IPv6 requires a significant investment of time and resources, and network designers may delay implementation for various reasons. However, as the pool of available IPv4 addresses continues to dwindle, and as more organizations begin to adopt IPv6, it is becoming increasingly important for organizations to make the transition to IPv6 in a timely manner.
Also accessible as pure text as chatgpt_on_ipv6_procrastination_legal.txt
This was actually an improvement. The reachability point is made clearer here, and the system clearly picked up on the legal requirement part.
Whether IPv6 is in fact more complex than IPv4 is debatable, but the robot is right that this is at least a common perception, factual or otherwise.
A clear improvement, but when I publish something, I make an effort to make it witty or at least a bit snarky.
Time to tweak the query once again.
My third and final IPv6-themed query of the day ran like this:
PNMH: write a funny story about why IPv6 implementation is so hard and scary to plan for and actually do
ChatGPT: Once upon a time, there was a small company that had been using IPv4 for all their networking needs. One day, their IT manager decided that it was time to upgrade to IPv6.
Excited by the prospect of faster speeds and improved security, the IT manager decided to jump right in and start planning for the implementation of IPv6. However, as they began to research and plan, they soon discovered that implementing IPv6 was not as easy as they had thought.
First, they realized that they would need to upgrade all of their networking equipment to support IPv6. This was going to be expensive, but the IT manager was determined to make it happen. They went ahead and ordered all the necessary equipment and started scheduling the installation.
However, as the installation date approached, the IT manager began to feel a sense of dread. What if the new equipment didn't work with their existing systems? What if they couldn't get everything up and running in time?
Feeling overwhelmed, the IT manager decided to take a break and watch some TV. As luck would have it, they stumbled upon a horror movie about a group of people who get lost in a network of tunnels and are pursued by an army of rogue IPv6 addresses.
Suddenly, the IT manager's fear of implementing IPv6 reached new heights. They couldn't shake the feeling that something was going to go horribly wrong. They started imagining all sorts of terrifying scenarios: their network crashing, their data being compromised, even their computers coming to life and taking over the office!
In the end, the IT manager decided to put the IPv6 implementation on hold indefinitely. They just couldn't bring themselves to take the risk. And so, the small company continued to chug along on IPv4, while their competitors raced ahead with faster, more secure networks.
The moral of the story? Don't let fear hold you back from upgrading to IPv6. Sure, it can be scary and complicated, but with the right planning and support, you can make the transition smoothly and reap the benefits of a faster, more secure network. And who knows, maybe you'll even get a good horror story out of it!
Also accessible as pure text as chatgpt_on_ipv6_procrastination_funny.txt
Ouch. The "Once upon a time" lead-in is back. But the rest is something that I might actually manage to edit, stomp, twist and crunch into something presentable, given enough coffee and time.
I have had worse material handed to me to edit by humans, and I may have done worse first drafts myself when writing when I should not have.
There are some logic issues in there, but nothing that could not be edited out or around.
By this time of the day, I had to go attend to a few things with deadlines, but I took the time to do a final query on a slightly different topic.
So after looking into a slightly painful topic, what better than to see what our robot friend has learned about our favorite operating system and poetry:
PNMH: write a poem about OpenBSD
ChatGPT: In a world of code and bytes,
Where hackers lurk in endless nights,
There shines a system, strong and bright,
A fortress of security, with OpenBSD in sight.
Like a lighthouse on a stormy shore,
It guards the network from cyber war,
With firewalls, cryptography, and more,
It stands tall, unyielding, forevermore.
In its heart, a philosophy,
Of code that's clean and free,
No backdoors, no secret keys,
Just trust and transparency.
From humble beginnings, it arose,
A gift to the world, from the chosen few,
A beacon of hope, for those who oppose,
The tyranny of those who seek to control.
So let us raise a toast, to OpenBSD,
A shining example of what can be,
When we dare to dream, and we dare to be free,
For a better tomorrow, for you and for me.
Also accessible as pure text as chatgpt_poem_about_openbsd.txt
I just had to post that to openbsd-misc, and it will be interesting to see how the thread develops. It is a fitting end to what is, I think, the piece I have posted under my own name that contains the least material written by myself.
Good night and good luck.
by Peter N. M. Hansteen (email@example.com) at Mar 04, 2023 14:43
Thanks to the efforts of OpenSnitch lead developer Gustavo Iñiguez Goya, who allowed me to sponsor the upload, the interactive application firewall OpenSnitch is now available in Debian Testing, soon to become the next stable release of Debian.
This is a package which sets up a network firewall on one or more machines, controlled by a graphical user interface that will ask the user whether a program should be allowed to connect to the local network or the Internet. If some background daemon is trying to dial home, it can be blocked from doing so with a simple mouse click, or by default simply by not doing anything when the GUI question dialog pops up. A list of all programs discovered using the network is provided in the GUI, giving the user an overview of how the machine's programs use the network.
OpenSnitch was uploaded for NEW processing about a month ago, and I had little hope of it getting accepted and shaping up in time for the package freeze, but the Debian ftpmasters proved to be amazingly quick at checking out the package, and it was accepted into the archive about a week after the first upload. It is now team maintained under the Go language team umbrella. A few fixes to the default setup are only in Sid, and should migrate to Testing/Bookworm in a week.
During testing I ran into an issue with Minecraft server broadcasts disappearing, which was quickly resolved by the developer with a patch and a proposed configuration change. I've been told this was caused by the Debian package's default use of /proc/ information to track kernel status, instead of the newer eBPF module that can be used. The reason is simply that upstream and I have failed to find a way to build the eBPF modules for OpenSnitch without a completely configured Linux kernel source tree, which as far as we can tell is unavailable as a build dependency in Debian. So far we have tried, unsuccessfully, to use the kernel-headers package. It would be great if someone could provide some clues on how to build eBPF modules on Debian's build daemons, possibly without the full kernel source.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Linux desktop systems have standardized how programs present themselves to the desktop system. If a package include a .desktop file in /usr/share/applications/, Gnome, KDE, LXDE, Xfce and the other desktop environments will pick up the file and use its content to generate the menu of available programs in the system. A lesser known fact is that a package can also explain to the desktop system how to recognize the files created by the program in question, and use it to open these files on request, for example via a GUI file browser.
A while back I ran into a package that did not tell the desktop system how to recognize its files and was not used to open its files in the file browser, and fixed it. In the process I wrote a simple debian/tests/ script to ensure the setup keeps working. It might be useful for other packages too, to ensure any future version of the package keeps handling its own files.
For this to work, the file format needs a useful MIME type that can be used to identify the format. If the file format does not yet have a MIME type, it should define one and preferably also register it with IANA to ensure the MIME type string is reserved.
The script uses the xdg-mime program from xdg-utils to query the database of standardized package information and ensure it returns sensible values. It also needs the location of an example file for xdg-mime to guess the format of.
#!/bin/sh
#
# Author: Petter Reinholdtsen
# License: GPL v2 or later at your choice.
#
# Validate the MIME setup, making sure motor types have
# application/vnd.openmotor+yaml associated with them and is connected
# to the openmotor desktop file.

retval=0

mimetype="application/vnd.openmotor+yaml"
testfile="test/data/real/o3100/motor.ric"
mydesktopfile="openmotor.desktop"

filemime="$(xdg-mime query filetype "$testfile")"
if [ "$mimetype" != "$filemime" ] ; then
    retval=1
    echo "error: xdg-mime claim motor file MIME type is $filemime, not $mimetype"
else
    echo "success: xdg-mime report correct mime type $mimetype for motor file"
fi

desktop=$(xdg-mime query default "$mimetype")
if [ "$mydesktopfile" != "$desktop" ]; then
    retval=1
    echo "error: xdg-mime claim motor file should be handled by $desktop, not $mydesktopfile"
else
    echo "success: xdg-mime agree motor file should be handled by $mydesktopfile"
fi

exit $retval
It is a simple way to ensure your users are not very surprised when they try to open one of your file formats in their file browser.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
It finally happened. Today, I added the three hundred thousandth (yes, 300,000th) spamtrap address to my greytrapping setup, for the most part fished out of incoming traffic here, for spammers to consume.
A little more than fifteen years after I first published a note about the public spamtrap list for my greytrapping setup in a piece called Hey, spammer! Here's a list for you!, the total number of imaginary friends has now reached three hundred thousand. I suppose that is an anniversary of sorts.
If this all sounds a bit unfamiliar, you can find a brief explanation of the data collected and the list itself on the traplist home page.
And yes, the whole thing has always been a bit absurd.
That said, at the time this greytrapping setup was announced in the mid noughties, we had been battling scammy spam email and malicious software that abused email to spread for some years, and we were eagerly looking for new ways to combat the spam problem, which tended to eat into time and resources we would rather have used on other things entirely.
With that backdrop, collecting made up or generated, invalid email addresses in our home domains from various logs as traps for spammers seemed like an excellent joke and a fun way to strike back at the undesirables who did their damnedest to flood our users' mailboxes.
The initial announcement shows the early enthusiasm, as does a followup later in the same month, Harvesting the noise while it's still fresh; SPF found potentially useful. With a small helping of scepticism towards some of the other methods and ideas that circulated at the time, of course.
The various followups (search on the site using "spam", "antispam" or for that matter "spamd" and you will find quite a few) reveal that we went to work on collecting, feeding to spamdb and publishing with a grin for quite a while.
I even gave a talk at BSDCan 2007 about the experience up to that point around the time the traplist became public.
A few years later I posted a slightly revised version of that somewhat overweight paper as a blog post called Effective Spam and Malware Countermeasures - Network Noise Reduction Using Free Tools that has also grown some addenda and updates over the years.
I have revisited the themes of spam and maintaining blocklists generated from the traffic that hits our site a few times over the years.
The most useful entries are probably Maintaining A Publicly Available Blacklist - Mechanisms And Principles (April 2013) and In The Name Of Sane Email: Setting Up OpenBSD's spamd(8) With Secondary MXes In Play - A Full Recipe (May 2012), while the summary articles Badness, Enumerated by Robots (August 2018) and Goodness, Enumerated by Robots. Or, Handling Those Who Do Not Play Well With Greylisting offer some more detail on the life that includes maintaining blocklists and pass lists.
However, by the time the largest influx of new spamtraps, or imaginary friends if you will, happened during February through April of 2019 I was fresh out of ideas on how to write something entertaining and witty about the episode.
What happened was that the collection that at the time had accumulated somewhat more than fifty thousand entries, at a rate of no more than a few tens of entries per day for years, started swelling by several thousand a day, harvesting again from the greylist.
The flood went on for weeks, and forced me to introduce a bit more automation in the collecting process. I also tried repeatedly to write about the sudden influx, but failed to come up with an interesting angle and put off writing that article again and again.
As I later noted in that year's only blog entry The Year 2019 in Review: This Was, Once Again, Weirder Than the Last One, starting January 30th 2019
"I noticed via my scriptery that reports on such things that a large number of apparent bounce message deliveries to addresses made up of "Western-firstname.Chinesefirstname.lastname@example.org", such as email@example.com or firstname.lastname@example.org, had turned up, in addition to a few other varieties with no dot in the middle, possibly indicating separate sources."
The IP addresses of the sending hosts were all in Chinese address ranges, and some weeks later, in April, we had ended up harvesting at least 120,000 unique new entries of a very similar kind before the volume went down rather abruptly to roughly what it had been before the incident.
It is likely that what we were seeing was backscatter from one or more phishing campaigns targeting Chinese users where for reasons only known to the senders they had chosen addresses in our domains as faked sender addresses.
Fortunately, by the time this incident occurred I had started keeping a log of spamtraps by date added, and the actual greylist dumps generated by the blocklist generating script can be retrieved, so more detailed data can be assembled when and if someone finds the time to do so.
As I have kept repeating over the years, maintaining the spamtrap list and the blocklists sometimes turns up bizarre phenomena. Among the things that keep getting added to the spamtraps list are the products of SMTP callbacks, and another source of new variants seems to be simply shoddy data handling at the sender end. We keep seeing things that more likely than not are oddly truncated versions of existing spamtraps.
And finally, while the number of trapped hosts at any time seems to have stabilized over the last couple of years at the mid to low four digits, we seem to be seeing that low number of hosts aggressively targeting existing spamtraps, as detailed in the February 2020 sextortion article.
I have at times been astonished by what appears to be taken as useful addresses to send mail to, and I am sure the collecting and blocking activity will turn up further absurdities unheard of going forward. It is also quite possible that I have forgotten about or skipped over one or more weird episodes in the saga of the spamtraps and blocklists. I hope to be able to deliver, at odd intervals, writeups that are interesting, useful, funny -- at least one and hopefully all.
If you are interested in the issues I touch on here or if the data I accumulate would be useful in your research, please let me know via comments or email.
And yes, since I know you have been dying to ask, this is the entry, collected in the evening (CEST) of 7 September 2022, which took our population of imaginary friends over the 300,000 line:
Sep 7 19:52:18 skapet sshd: Failed password for invalid user ftpshared from 22.214.171.124 port 45876 ssh2
which by the obvious processing we do here, from failed login attempt to official spamtrap, becomes
Date        Source  Original   Spamtrap
2022-09-07  SSH     ftpshared  email@example.com
and joins the collection as entry number 300,000 (three hundred thousand).
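Just to illustrate, the transformation from failed login attempt to trap address can be sketched as a tiny filter. This is an assumption about the mechanics, not the actual processing used here, which surely involves deduplication and sanity checks, and the domain datakilden.example is a placeholder rather than the real traplist domain:

```shell
#!/bin/sh
# Hypothetical sketch only: pick the invalid user name out of sshd log
# lines on stdin and turn it into a spamtrap candidate address.
# "datakilden.example" is an assumed domain, not the real one.
awk '/Failed password for invalid user/ {
    for (i = 1; i < NF; i++)
        if ($i == "user") print $(i + 1) "@datakilden.example"
}'
```

Feeding the log line quoted above through this filter yields ftpshared@datakilden.example.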
By the time you read this, the total is likely to have increased yet again.
On a relevant mailing list it has been suggested that if you run a large scale email service, our list of spamtraps could be useful in filtering outgoing mail. If a customer tries to contact one of our imaginary friends, you probably need to pay extra attention to that customer.
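A minimal sketch of such an outgoing-mail check, assuming you keep a local copy of the published traplist with one address per line; the default file path here is invented, so point TRAPLIST at wherever your copy lives:

```shell
#!/bin/sh
# Hypothetical sketch: warn when an outgoing recipient matches a local
# copy of the published traplist, one address per line.
# The default path /var/db/traplist.txt is an assumption.
rcpt="$1"
traplist="${TRAPLIST:-/var/db/traplist.txt}"
if grep -qxF "$rcpt" "$traplist"; then
    echo "warning: $rcpt is a known spamtrap, review the sending account"
else
    echo "ok: $rcpt not found in traplist"
fi
```

A real deployment would hook a check like this into the MTA's policy or milter layer rather than a standalone script, but the lookup itself is this simple.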
by Peter N. M. Hansteen (firstname.lastname@example.org) at Jan 28, 2023 18:27
Consider saving the entire sources of https://download.gnome.org/sources/ by uploading the latest sources to a Network as a Service entity such as gnomevoice.org.
Printing source code saved GNOME 2.0 after Red Hat, Inc.'s power failure in North Carolina during the winter of 1999, when GNOME 1.0 was suddenly lost.
A worldwide power failure should be of our greatest concern at the moment.
Never put all of your eggs in the same basket, was the lesson learnt from open source domains such as sf.net, mozillathunderbird.org, and gphoto.fix.no.
We must also be prepared to save Project GNOME Voice like a Network as a Service.
Copyleft Solutions is the current Network as a Service host of gnomeradio.org and gnomevoice.org.
by oleaamot at Dec 05, 2022 07:35
I work on Radio, Gingerblue and Voice, and previously I worked on gPhoto in the GNOME Project since November 1998.
While I have always written as a non-profit, non-paid volunteer for the GNU and GNOME projects (Radio in 2002-2022, Gingerblue in 2018-2022 and Voice in 2022), and although my announcement of org.gnome.Radio during GUADEC 2022 drew criticism from one significant member of the GNOME community for posting it publicly, I have stood up for common and core GNOME values ever since I took part in the discussion of the GNOME Foundation at ENST in Paris in March 2000. I joined GNOME in November 1998 (24 years ago), after co-launching and working on the GNU Photo project for digital still photography device support in GNOME, which turned into gPhoto in 1999.
I have seen a gradual transition of GNOME services away from people.gnome.org since 2020 that I never spoke up on.
GNOME Foundation’s board of directors agreed to the gradual transition away from the mailing lists years ago, so I doubt they’ll suddenly change tack now. Even though I’m familiar with the discussions and plans around this change, and although I wasn’t an active GNOME contributor between 2004 and 2014, I disagreed with the GNOME Foundation.
You can view the historic email archives on mail.gnome.org and the GNOME Foundation list at https://mail.gnome.org/archives/foundation-list/
Where will future GNOME Foundation discussions take place? Most likely on https://discourse.gnome.org.
My experience with this platform is limited. I am more familiar with mail.gnome.org. However, the vote of the GNOME Foundation’s board of directors stands.
mail.gnome.org is going stale after 25 1/2 years of service in the project.
Today I am announcing that I am leaving the GNOME Foundation after 25 years of service and will work further on the gnomeradio.org, gingerblue.org, and gnomevoice.org domains, as well as complete my thesis Public Voice Communication about the software Voice (gnome-voice) at NTNU before June 24th, 2024.
by oleaamot at Dec 05, 2022 00:00
Kubernetes is a lot of things, some cool, some vexing.
One of those things is that it does not necessarily make it easy to back up data stored in pods. And if the data is a database, you can't really back it up from the outside via the data storage mount either, since such a backup is liable to become inconsistent and unusable. You have to go through the database engine to get a consistent backup.
At work we have a self-hosted Kubernetes cluster and quite a bit of old-fashioned infrastructure too. Lately some postgres databases have been deployed here with the Bitnami Helm chart.
We use automation tools to set up backups and all kinds of things. And in these tools we prefer not to put passwords if we can avoid it.
One _could_ make a backup using pg_dump or similar, giving it the pod IP, username and password, but we'd like to avoid that.
Examining the Bitnami postgres pod, we found it set up quite interestingly, with postgres running as uid 1001, which does not have a user account associated with it. This is apparently to accommodate OpenShift. It also makes it quite hard to run psql inside the pod:
psql: local user with ID 1001 does not exist
There are additional things that complicate matters. Studying the GitHub issues for the Helm chart, I found that its makers had a workaround. After experimenting with kubectl I managed to construct a command that does not require us to put the database password into the backup script:
kubectl exec -n $NAMESPACE $PODNAME -- bash -c ". /opt/bitnami/scripts/libpostgresql.sh && postgresql_enable_nss_wrapper && PGPASSWORD=\$POSTGRES_PASSWORD pg_dump $OPTS -c -U postgres $DB"
The magic is in libpostgresql.sh and the postgresql_enable_nss_wrapper, which makes the user "postgres" defined for the commands that follow.
You have to supply the environment variables NAMESPACE, PODNAME, the optional OPTS for options and DB yourself. POSTGRES_PASSWORD is taken from the deployed pod.
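To give an idea of how this slots into an unattended backup job, here is a sketch of a wrapper in dry-run form: it only prints the kubectl command a cron job would run. All the default names below (namespace, pod, database, backup directory) are placeholders, not values from the original setup:

```shell
#!/bin/sh
# Hypothetical wrapper around the backup command above, in dry-run form.
# All defaults are placeholders; override them via the environment.
NAMESPACE="${NAMESPACE:-databases}"
PODNAME="${PODNAME:-postgresql-0}"
DB="${DB:-mydb}"
OPTS="${OPTS:-}"
BACKUPDIR="${BACKUPDIR:-/var/backups/postgres}"

dumpfile="$BACKUPDIR/$DB-$(date +%Y%m%d).sql"

# The in-pod part is unchanged from the command in the text: sourcing
# libpostgresql.sh and calling postgresql_enable_nss_wrapper makes uid
# 1001 resolve to "postgres", and the password is read from the pod's
# own POSTGRES_PASSWORD environment variable.
remote=". /opt/bitnami/scripts/libpostgresql.sh && postgresql_enable_nss_wrapper && PGPASSWORD=\$POSTGRES_PASSWORD pg_dump $OPTS -c -U postgres $DB"

echo "would run: kubectl exec -n $NAMESPACE $PODNAME -- bash -c '$remote' > $dumpfile"
```

Dropping the echo and redirecting the kubectl output to $dumpfile turns the dry run into the real thing.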
by nicolai (email@example.com) at Aug 04, 2022 11:49
Some years ago Ubuntu introduced snap and said it would be better. In my experience it was slower.
And then they started packaging chromium-browser as a snap only. This broke kde-plasma and kde-connect (media and phone desktop integrations), and I resorted to installing Chrome from Google. This was quite easy because Chrome comes as a .deb package which also installs an apt source, so it's upgraded just like the rest of the system.
This, by the way, is the apt source for Chrome; you drop it in e.g. /etc/apt/sources.list.d/google-chrome.list:
deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main
And then you install the Google signing key:
wget -qO- https://dl.google.com/linux/linux_signing_key.pub | sudo tee /etc/apt/trusted.gpg.d/google-linux-signing-key.asc
Then you can do 'apt update' and 'apt install google-chrome-stable'. See also https://www.google.com/linuxrepositories/ for further information
Lately I've been using Chrome less and less privately and Firefox more and more due to the privacy issues with Chrome.
In Ubuntu 22.04 they started providing Firefox as a snap. Again this broke desktop and phone integration; actually, I didn't look very hard, it was just gone and I wanted it back. There are no good apt sources for Firefox provided by the Mozilla project. The closest I could find was the Firefox provided by Debian.
Which turned out to work very well, but only thanks to the apt preference system.
You make two files: First /etc/apt/sources.list.d/bullseye.list:
deb http://ftp.no.debian.org/debian/ bullseye main
deb http://security.debian.org/debian-security bullseye-security main
deb http://ftp.no.debian.org/debian/ bullseye-updates main
Then put this in /etc/apt/preferences (I'm in Norway, replace "no" with another country code if you like):
Pin: origin "ftp.no.debian.org"
Pin: origin "security.debian.org"
Pin: release n=jammy
Pin-Priority: 950
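The preferences fragment above appears to have lost some of its stanza structure in extraction; apt ignores pin entries that lack a Package line. A complete /etc/apt/preferences expressing the same intent might look like the following sketch. Only the 950 pin for jammy comes from the original; the Package lines and the 500 priorities for the two Debian origins are assumptions (any priority below 950 that still permits installation would do):

```
Package: *
Pin: origin "ftp.no.debian.org"
Pin-Priority: 500

Package: *
Pin: origin "security.debian.org"
Pin-Priority: 500

Package: *
Pin: release n=jammy
Pin-Priority: 950
```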
Also you need to install the Debian repository signing keys for that:
wget -qO- https://ftp-master.debian.org/keys/archive-key-11.asc | sudo tee /etc/apt/trusted.gpg.d/bullseye.asc
wget -qO- https://ftp-master.debian.org/keys/archive-key-11-security.asc | sudo tee /etc/apt/trusted.gpg.d/bullseye-security.asc
Then you execute these two in turn:
apt update
apt install firefox-esr
And you should have Firefox without getting any other things from Debian; the system will prefer Ubuntu 22.04, aka Jammy.
Big fat NOTE: This might complicate later release upgrades on your Ubuntu box. do-release-upgrade will disable your Chrome and Bullseye apt sources, and quite possibly the preference file will be neutralized as well; if not, you might have to neutralize it yourself.
by nicolai (firstname.lastname@example.org) at May 20, 2022 20:24
by Salve J. Nilsen at Jul 10, 2020 16:25
by Salve J. Nilsen at Jun 27, 2020 22:20
If we had made a program that translated from Norwegian to Sámi, the result would have been Sámi at least as bad as the Norwegian we are able to produce now. Norwegian and Sámi are grammatically very different, and it is hard to produce good Sámi from a Norwegian source. Such a program would lead to the publication of a whole lot of very bad Sámi. A situation where most of the Sámi published on the internet came from our programs strikes us as a nightmare. It would quite simply have destroyed Sámi written culture.
See the op-ed: https://www.nordnorskdebatt.no/samisk-sprak/digitalisering/facebook/kan-samisk-brukes-i-det-offentlige-rom/o/5-124-48030
by unhammer at May 31, 2018 09:00
Mimes brønn is an online service that helps you request access to documents from the Norwegian public administration in accordance with the Freedom of Information Act (offentleglova) and the Environmental Information Act. The service has a publicly available archive of all answers received to access requests, so that the public administration can avoid answering the same requests over and over. You find the service at
According to old Norse mythology, the well of knowledge is guarded by Mímir and lies under one of the roots of the world tree Yggdrasil. Drinking the water of Mímir's well gave such valuable knowledge and wisdom that the young god Odin was willing to pawn an eye and become one-eyed to be allowed to drink from it.
The site is maintained by the NUUG association and is particularly well suited for politically interested people, organizations and journalists. The service is based on its British sister service WhatDoTheyKnow.com, which has already provided access that has resulted in documentaries and countless press stories. According to mySociety a few years ago, about 20% of the access requests to central government went via WhatDoTheyKnow. We in NUUG hope that NUUG's service Mimes brønn can be just as useful for the inhabitants of Norway.
Over the weekend the service was updated with a lot of new functionality. The new version works better on small screens, and now shows delivery status for requests, so that the sender can more easily check that the recipient's email system has confirmed receipt of the access request. The service was set up by volunteers in the NUUG association and was launched in the summer of 2015. Since then, 121 users have sent more than 280 requests about everything from wedding rentals of the Opera house and negotiations over the use of Norway's top-level DNS domain .bv, to the registration of applications for housing support, and the site is a small treasure chest of interesting and useful information. NUUG has engaged lawyers who can assist with appeals over denied access or flawed case handling.
– «NUUG's Mimes brønn was invaluable when we succeeded in ensuring that the DNS top-level domain .bv remains in Norwegian hands,» says Håkon Wium Lie.
The service documents widely varying practice in the handling of access requests, both in response time and in the content of the replies. The vast majority are handled quickly and correctly, but in several cases access was granted to documents that the responsible agency later wished to withdraw, and access has been granted where the redaction was done in a way that does not actually hide the information that was supposed to be redacted.
– «The Freedom of Information Act is a cornerstone of our democracy. It does not care who requests access, or why. The Mimes brønn project materializes this principle: anyone can request access and appeal refusals, and the documentation is made public. This makes Mimes brønn one of the most exciting transparency projects I have seen in recent times,» says Vegard Venli, the man who got the tax authority's ownership register opened up.
We in the NUUG association hope Mimes brønn can be a useful tool for keeping our democracy in good repair.
by Mimes Brønn at Feb 13, 2017 14:07
Mimes brønn has now been up for about a year, so we thought it could be interesting to present some brief statistics on how the service has been used.
At the beginning of July 2016, Mimes brønn had 71 registered users who had sent out 120 access requests, of which 62 (52%) were successful, 19 (16%) partially successful, 14 (12%) refused, and 10 (8%) were answered with the body stating it did not have the information, while 12 requests (10%; 6 from 2016, 6 from 2015) were still unanswered. A few (3) of the requests could not be categorized. We thus see that about two thirds of the requests were fully or partially successful. That is good!
The time it takes before the body first sends a reply varies a lot, from the same day (some requests sent to the Immigration Appeals Board, the Norwegian Public Roads Administration, Økokrim, the Media Authority, the Data Protection Authority, and the Brønnøysund Register Centre), up to 6 months (Ballangen municipality) or longer (the Storting, the Ministry of Petroleum and Energy, the Ministry of Justice and Public Security, UDI – the Directorate of Immigration – and Statistics Norway have received access requests that are still unanswered). The average time here was a couple of weeks (not counting the 12 cases where no reply has arrived). It follows from § 29 first paragraph of the Freedom of Information Act that requests for access to the administration's documents must be answered «without undue delay», which according to the Parliamentary Ombudsman should in most cases be interpreted as «the same day, or at least within 1-3 working days». So there is room for improvement here.
The right of appeal (offentleglova § 32) was used in 20 of the access requests. In most (15; 75%) of these cases the appeal led to the request succeeding. The average time to get a reply to an appeal was one month (not counting 2 cases, appeals sent to the Norwegian Public Roads Administration and Ruter AS, where no reply has arrived). It is well worth appealing, and completely free! The Parliamentary Ombudsman has stated that 2-3 weeks is beyond acceptable case-handling time for appeals.
Most requests had been sent to the Ministry of Foreign Affairs (9), closely followed by Fredrikstad municipality and the Brønnøysund Register Centre. In all, requests were sent to 60 public authorities, of which 27 received two or more. There are over 3700 authorities in the Mimes brønn database, so most of them have yet to receive an access request via the service.
When we look at what kind of information people have asked for, we see a broad spectrum of interests: everything from the municipality's parking spaces, travel expense claims exceeding the state's rates for accommodation, correspondence about asylum reception centres and negotiations over the .bv top-level domain, to documents about Myanmar.
The authorities do all sorts of things. Some of it is done badly, some of it they do well. The more we find out about how the authorities work, the greater our opportunity to suggest improvements to what works badly... and to applaud what works well. If there is something you want access to, just click on https://www.mimesbronn.no/ and you are on your way.
by Mimes Brønn at Jul 15, 2016 15:56
The Twitter user @IngeborgSteine recently got some attention when she tweeted a picture of the Nynorsk version of her economics exam at NTNU:
This was my economics exam in "Nynorsk". #nynorsk #noregsmållag #kvaialledagar https://t.co/RjCKSU2Fyg
— Ingeborg Steine (@IngeborgSteine) May 30, 2016
Creative coinages like *kvisleis and all the dialect forms and archaisms would be unlikely to appear in a machine-translated version, so I wondered how much better or worse it would have been if the examiner had simply used Apertium instead. Ingeborg Steine was helpful enough to post the Bokmål version, so let's try it out.
No kvisleis, and free of tær and fyr, but it is not perfect either: some words are missing from the dictionaries and therefore get the wrong inflection, teller is interpreted as a noun, ein anna maskin has the wrong inflection on the first word (a rule was missing there), and at is in one place interpreted as an adverb (which leads to the curious fragment det verta at anteke tilvarande). In addition, the language is detected as Tatar by the website, so perhaps the Norwegian was a bit heavy? But these errors are not particularly hard to fix; the development version of Apertium now gives:
There are still a couple of small things that could be fixed, but it is already better than most of the exams I was handed at UiO …
A complete feed is available in any of your favourite syndication formats linked by the buttons below.