Created by three guys who love BSD, we cover the latest news and have an extensive series of tutorials, as well as interviews with various people from all areas of the BSD community. The show also serves as a platform for support and questions. We love and advocate FreeBSD, OpenBSD, NetBSD, DragonFlyBSD and TrueOS. Our show aims to be helpful and informative for new users who want to learn about them, while staying entertaining for the people who are already pros. The show airs on Wednesdays at 2:00 PM (US Eastern time), and the edited version is usually up the following day.

Similar Podcasts

Thinking Elixir Podcast

The Thinking Elixir podcast is a weekly show where we talk about the Elixir programming language and the community around it. We cover news and interview guests to learn more about projects and developments in the community.

The Cynical Developer

A UK-based technology and software developer podcast that helps you improve your development knowledge and career by explaining the latest and greatest in development technology and providing you with what you need to succeed as a developer.

Elixir Outlaws

Elixir Outlaws is an informal discussion about interesting things happening in Elixir. Our goal is to capture the spirit of a conference hallway discussion in a podcast.

178: Enjoy the Silence

January 25, 2017 1:19:10 57.0 MB Downloads: 0

This week on BSD Now, we will be discussing a wide variety of topics including routers, run-controls, the "Rule" of Silence, and more. This episode was brought to you by

Headlines

Ports no longer build on EOL FreeBSD versions (https://www.reddit.com/r/freebsd/comments/5ouvmp/ports_no_longer_build_on_eol_freebsd_versions/)

The FreeBSD ports tree has been updated to automatically fail if you try to compile ports on EOL versions of FreeBSD (any version of 9.x or earlier, 10.0-10.2, or 11 from before 11.0). This is to prevent shooting yourself in the foot, as the compatibility code for those older OSes has been removed now that they are no longer supported. If you use pkg, you will also run into problems on old releases: packages are always built on the oldest supported release in a branch. Until recently, this meant packages for 10.1, 10.2, and 10.3 were compiled on 10.1. Now that 10.1 and 10.2 are EOL, packages for 10.x are compiled on 10.3. This matters because 10.3 supports openat() and the various other *at() functions used by Capsicum. Now that pkg and packages are built on a version that supports these functions, they will not run on systems that do not, so pkg will exit with an error as soon as it tries to open a file. You can work around this temporarily by using the pkg-static command, but you should upgrade to a supported release immediately.

***

Improving TrueOS: OpenRC (https://www.trueos.org/blog/improving-trueos-openrc/)

With TrueOS moving to a rolling-release model, we've decided to be more proactive in sharing news about new features as they land. This week we posted an article about the transition to OpenRC. In past episodes you've heard me mention OpenRC, but hopefully today we can answer any lingering questions you may still have about it. The first thing always asked is: "What is OpenRC?" OpenRC is a dependency-based init system working with the system-provided init program.
It is used by several Linux distributions, including Gentoo and Alpine Linux; however, OpenRC was created by NetBSD developer Roy Marples in one of those interesting intersections of Linux and BSD development. OpenRC's development history, portability, and 2-clause BSD license made its integration into TrueOS an easy decision.

Now that we know a bit about what it is, how does it behave differently than traditional rc? TrueOS now uses OpenRC to manage all system services, as opposed to FreeBSD's rc. Instead of using rc.d for base system rc scripts, OpenRC uses init.d. Also, every service in OpenRC has its own user configuration file, located in /etc/conf.d/ for the base system and /usr/local/etc/conf.d/ for ports. Finally, OpenRC uses runlevels, as opposed to FreeBSD's single- or multi-user modes. You can view the services and their runlevels by typing rc-update show -v in a CLI. TrueOS also integrates OpenRC service management into SysAdm with the Service Manager tool.

One of the prime benefits of OpenRC is much faster boot times, which is important in a portable world of laptops (and desktops as well). Service monitoring and crash detection are also important parts of what make OpenRC a substantial upgrade for TrueOS.

Lastly, people have asked us about migration: what is done, and what isn't? As of now, almost all FreeBSD base system services have been migrated over. In addition, most desktop-facing services required to run Lumina and the like are also ported. We are still going through the ports tree and converting legacy rc.d scripts to init.d, but the process takes time. Several new folks have begun contributing OpenRC scripts, and we hope to have all of the roughly 1,000 ports that need conversion done this year.
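Day-to-day OpenRC service management boils down to a handful of commands. A quick sketch (the sshd service name is just an example, and this assumes an OpenRC-based system such as TrueOS; these commands won't do anything on a stock FreeBSD install):

```
# List all services and the runlevels they are assigned to:
rc-update show -v
# Add a service to the default runlevel so it starts at boot:
rc-update add sshd default
# Query, start, or restart an individual service:
rc-service sshd status
rc-service sshd restart
# Per-service options live in /etc/conf.d/<service> for the base
# system and /usr/local/etc/conf.d/<service> for ports.
```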
BSDRP Releases 1.70 (https://sourceforge.net/projects/bsdrp/files/BSD_Router_Project/1.70/)

A new release of the BSD Router Project. This distro is designed to replace high-end routers, like those from Cisco and Juniper, with FreeBSD running on a regular off-the-shelf server. Highlights:

Upgraded to FreeBSD 11.0-STABLE r312663 (skip 11.0 for a massive performance improvement)
Re-added: netmap-fwd (https://github.com/Netgate/netmap-fwd)
Added a FIBsync patch to netmap-fwd from Zollner Robert
netmap pkt-gen supports IPv6, thanks to Andrey V. Elsukov (ae@freebsd.org)
bird 1.6.3 (adds BGP Large Communities support)
OpenVPN 2.4.0 (adds the high-speed AEAD GCM cipher)
All of the other packages have also been upgraded

A lot of great work has been done on BSDRP, and it has also generated a lot of great benchmarks and testing that have resulted in performance increases and an improved understanding of how FreeBSD networking scales across different CPU types and speeds.

***

DragonFlyBSD gets UEFI support (http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/7b1aa074fcd99442a1345fb8a695b62d01d9c7fd)

This commit adds UEFI support to the DragonFly installer, allowing new systems to be installed to boot from UEFI. This script (http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/9d53bd00e9be53d6b893afd79111370ee0c053b0) provides a way to build a HAMMER filesystem that works with UEFI. There is also a UEFI man page (http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/d195d5099328849c500d4a1b94d6915d3c72c71e). The install media (http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/5fa778d7b36ab0981ff9dcbd96c71ebf653a6a19) has also been updated to support booting from either UEFI or MBR, in the same way that the FreeBSD images work.

***

News Roundup

The Rule of Silence (http://www.linfo.org/rule_of_silence.html)

"The rule of silence, also referred to as the silence is golden rule, is an important part of the Unix philosophy that states that when a program has nothing surprising,
interesting or useful to say, it should say nothing. It means that well-behaved programs should treat their users' attention and concentration as being valuable and thus perform their tasks as unobtrusively as possible. That is, silence in itself is a virtue."

This doesn't mean a program cannot be verbose; it just means you have to ask for the additional output, rather than getting it by default.

"There is no single, standardized statement of the Unix philosophy, but perhaps the simplest description would be: "Write programs that are small, simple and transparent. Write them so that they do only one thing, but do it well and can work together with other programs." That is, the philosophy centers around the concepts of smallness, simplicity, modularity, craftsmanship, transparency, economy, diversity, portability, flexibility and extensibility."

"This philosophy has been fundamental to the fact that Unix-like operating systems have been thriving for more than three decades, far longer than any other family of operating systems, and can be expected to see continued expansion of use in the years to come."

"The rule of silence is one of the oldest and most persistent design rules of such operating systems. As intuitive as this rule might seem to experienced users of such systems, it is frequently ignored by the developers of other types of operating systems and application programs for them. The result is often distraction, annoyance and frustration for users."

"There are several very good reasons for the rule of silence: (1) One is to avoid cluttering the user's mind with information that might not be necessary or might not even be desired. That is, unnecessary information can be a distraction. Moreover, unnecessary messages generated by some operating systems and application programs are sometimes poorly worded, and can cause confusion or needless worry on the part of users."

No news is good news.
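A tiny shell illustration of the rule (the fruit list is throwaway demo data):

```shell
# A quiet filter emits only its data, so its output can feed the next
# program in a pipeline without any cleanup:
printf 'carrot\napple\nbanana\n' | sort | head -n 1
# grep -q takes silence to the extreme: it prints nothing at all and
# reports its answer purely through the exit status:
if printf 'carrot\napple\n' | grep -q apple; then
    echo 'found'
fi
```

The first pipeline prints only "apple"; success stays silent, and the answer is the data itself.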
When there is bad news, error messages should be descriptive, and ideally tell the user what they might do about the error.

"A third reason is that command line programs (i.e., all-text mode programs) on Unix-like operating systems are designed to work together with pipes, i.e., the output from one program becomes the input of another program. This is a major feature of such systems, and it accounts for much of their power and flexibility. Consequently, it is important to have only the truly important information included in the output of each program, and thus in the input of the next program."

Have you ever had to strip out useless output just so you could feed the data into another program?

"The rule of silence originally applied to command line programs, because all programs were originally command line programs. However, it is just as applicable to GUI (graphical user interface) programs. That is, unnecessary and annoying information should be avoided regardless of the type of user interface."

"An example is the useless and annoying dialog boxes (i.e., small windows) that pop up on the display screen with surprising frequency on some operating systems and programs. These dialog boxes contain some obvious, cryptic or unnecessary message and require the user to click on them in order to close them and proceed with work. This is an interruption of concentration and a waste of time for most users. Such dialog boxes should be employed only in situations in which some unexpected result might occur or to protect important data."

The article goes on to make an analogy about public address systems: if too many unimportant messages, like advertisements, are sent over the PA system, people start to ignore them, and miss the important announcements.

***

The Tao of tmux (https://leanpub.com/the-tao-of-tmux/read)

An interesting article floated across my news feed a few weeks back.
It essentially boils down to a book called the "Tao of tmux," which immediately piqued my interest. My story may be similar to many of yours: I was initially raised on using screen, and screen only, for my terminal session and multiplexing needs. Since then I've only had a passing interest in tmux, but it's always been one of those utilities I felt was worthy of investing some more time in (especially after seeing some of the neat setups some of my peers have with it). Needless to say, this article has been bookmarked, and I've started digesting some of it, but I thought it would be good to share it with anybody else who finds themselves in a similar situation.

The book starts off well, explaining in the simplest terms possible what tmux really is, by comparing and contrasting it to something we are all familiar with: GUIs! Helpfully, it also includes a chart which explains some of the terms we will be using frequently when discussing tmux (https://leanpub.com/the-tao-of-tmux/read#leanpub-auto-window-manager-for-the-terminal).

One thing the author does recommend is making sure you are up to speed on your terminal knowledge. "Before getting into tmux, a few fundamentals of the command line should be reviewed. Often, we're so used to using these out of street smarts and muscle memory a great deal of us never see the relation of where these tools stand next to each other. Seasoned developers are familiar with zsh, Bash, iTerm2, konsole, /dev/tty, shell scripting, and so on. If you use tmux, you'll be around these all the time, regardless of whether you're in a GUI on a local machine or SSH'ing into a remote server." If you want to learn more about how processes and TTYs work at the kernel level (data structures and all), the book The Design and Implementation of the FreeBSD Operating System (2nd Edition) by Marshall Kirk McKusick is nice. In particular, see Chapter 4, Process Management, and Section 8.6, Terminal Handling.
The TTY demystified by Linus Åkesson (available online) dives into the TTY and is a good read as well. We had to get that shout-out of Kirk's book in here. ;)

From here the book/article takes us on a whirlwind journey through sessions, windows, panes and more. Every control command is covered, along with information on how to customize your status bar, plus tips, tricks and the like. There's far more here than we can cover in a single segment, but you are highly encouraged to bookmark this one and start your own adventure into the world of tmux.

***

SDF Celebrates 30 years of service in 2017 (https://sdf.org/)

HackerNews thread on SDF (https://news.ycombinator.com/item?id=13453774)

"Super Dimension Fortress (SDF, also known as freeshell.org) is a non-profit public access UNIX shell provider on the Internet. It has been in continual operation since 1987 as a non-profit social club. The name is derived from the Japanese anime series The Super Dimension Fortress Macross; the original SDF server was a BBS for anime fans. From its BBS roots, which have been well documented as part of the BBS: The Documentary project, SDF has grown into a feature-rich provider serving members around the world."

A public access UNIX system, it was many people's first access to a UNIX shell. In the 90s, virtual machines were rare, the software to run them usually cost a lot of money, and no one had very much memory to try to run two operating systems at the same time. So for many people, these types of shell accounts were the only way they could access UNIX without having to replace the OS on their only computer. This is how I first started with UNIX, eventually moving to paying for access to bigger machines, and then buying my own servers and renting out shell accounts to host IRC servers and channel protection bots.

"On June 16th, 1987 Ted Uhlemann (handle: charmin, later iczer) connected his Apple ][e's 300 baud modem to the phone line his mother had just given him for his birthday.
He had published the number the night before on as many BBSes around the Dallas Ft. Worth area that he could and he waited for the first caller. He had a copy of Magic Micro BBS which was written in Applesoft BASIC and he named the BBS "SDF-1" after his favorite Japanimation series ROBOTECH (Macross). He hoped to draw users who were interested in anime, industrial music and the Church of the Subgenius."

I too started out in the world of BBSes before I had access to the internet. My parents got me a dedicated phone line for my birthday, so I wouldn't tie up their line all the time. I quickly ended up running my own BBS, the Sudden Death BBS (Renegade (https://en.wikipedia.org/wiki/Renegade_(BBS)) on MS-DOS). I credit this early experience for my discovery of a passion for systems administration that led me to my current career.

"Slowly, SDF has grown over all these years, never forgetting our past and unlike many sites on the internet, we actually have a past. Some people today may come here and see us as outdated and "retro". But if you get involved, you'll see it is quite alive with new ideas and a platform for opportunity to try many new things. The machines are often refreshed, the quotas are gone, the disk space is expanding as are the features (and user driven features at that) and our cabinets have plenty of space for expansion here in the USA and in Europe (Germany)."

"Think about ways you'd like to celebrate SDF's 30th and join us on the 'bboard' to discuss what we could do. I realize many of you have likely moved on yourselves, but I just wanted you to know we're still here and we'll keep doing new and exciting things with a foundation in the UNIX shell."

***

Getting Minecraft to Run on NetBSD (https://www.reddit.com/r/NetBSD/comments/5mtsy1/getting_minecraft_to_run_on_netbsd/)

One thing that doesn't come up often on BSD Now is the idea of gaming.
I realize most of us are server folks, or perhaps don't play games (the PC is for work; use your fancy-schmancy PS4 and get off my lawn, you kids). Today I thought it would be fun to highlight this post over at Reddit talking about running Minecraft on NetBSD. Now, I realize this may not be news to some of you, but perhaps it is to others. For the record, my kids have been playing Minecraft on PC-BSD / TrueOS for years; it's the primary reason they are more often booted into that instead of Windows. (Funny story behind that: I got sick of all the 3rd-party mods, which more often than not came helpfully bundled with viruses and malware.)

On NetBSD the process looks a bit different than on FreeBSD. First up, you'll need to enable Linux emulation and install the Oracle JRE (not OpenJDK; that path leads to sadness here). The guide then walks us through the process of fetching the Linux runtime packages, extracting them, and enabling bits such as procfs that are required to run Linux binaries. Once that's done, Minecraft is only a simple "oracle8-jre /path/to/minecraft.jar" command away from starting up, and you'll be "crafting" in no time. (Does anybody even play survival anymore?)

***

Beastie Bits

UNIX on the Computer Chronicles (https://youtu.be/g7P16mYDIJw)
FreeBSD: Atheros AR9380 and later, maximum UDP TX goes from 250mbit to 355mbit (https://twitter.com/erikarn/status/823298416939659264)
Capsicumizing traceroute with casper (https://reviews.freebsd.org/D9303)

Feedback/Questions

Jason - Tarsnap on Windows (http://pastebin.com/Sr1BTzVN)
Mike - OpenRC & DO (http://pastebin.com/zpHyhHQG)
Anonymous - Old Machines (http://pastebin.com/YnjkrDmk)
Matt - iocage (http://pastebin.com/pBUXtFak)
Hjalti - Rclone & FreeNAS (http://pastebin.com/zNkK3epM)

177: Getting Pi on my Wifi

January 18, 2017 1:18:42 56.66 MB Downloads: 0

This week on BSD Now, we've got WiFi galore, a new iocage, and some RPi3 news and guides to share. Stay tuned for your place to B...SD! This episode was brought to you by

Headlines

WiFi: 11n hostap mode added to athn(4) driver, testers wanted (http://undeadly.org/cgi?action=article&sid=20170109213803)

"OpenBSD as WiFi access points look set to be making a comeback in the near future." "Stefan Sperling added 802.11n hostap mode, with full support initially for the Atheros chips supported by the athn(4) driver." "Hostap performance is not perfect yet but should be no worse than 11a/b/g modes in the same environment." "For Linux clients a fix for WME params is needed which I also posted to tech@." "This diff does not modify the known-broken and disabled ar9003 code, apart from making sure it still builds." "I'm looking for both tests and OKs."

There has also been a flurry of work (http://svnweb.freebsd.org/base/head/sys/net80211/?view=log) in FreeBSD on the ath10k driver, which supports 802.11ac. See this commit (https://svnweb.freebsd.org/base?view=revision&revision=310147) and this one (https://svnweb.freebsd.org/base?view=revision&revision=311579).

***

The long-awaited iocage update has landed (https://github.com/iocage/iocage)

We've hinted at the new things happening behind the scenes with iocage, and this last week the code made its first public debut. So what's changed, you may ask? The biggest change is that iocage has undergone a complete overhaul, moving from its original shell base to Python. The story behind that is that the author (Brandon) works at iXsystems, and the plan is to move away from the legacy warden-based jail management, which was also shell-based. The new Python rewrite will allow it to integrate into FreeNAS (and other projects) better by exposing an API for all jail management tasks. That's right: no more ugly CLI output parsing just to wrangle jail options, either at creation or at runtime. But what about users who just run iocage manually from the CLI?
No worries: the new iocage is almost identical to the original CLI usage, making the switch-over very simple. Just to recap, let's look at the new features list:

"FEATURES:
+ Ease of use
+ Rapid jail creation within seconds
+ Automatic package installation
+ Virtual networking stacks (vnet)
+ Shared IP based jails (non vnet)
+ Transparent ZFS snapshot management
+ Export and import"

The new iocage is available now via ports and packages under sysutils/py-iocage. Give it a spin and be sure to report issues back to the developer(s).

***

Reading DHT11 temperature sensors on a Raspberry Pi under FreeBSD (https://smallhacks.wordpress.com/2017/01/14/reading-dht11-temperature-sensor-on-raspberry-pi-under-freebsd/)

"DHT-11 is a very cheap temperature/humidity sensor which is commonly used in the IoT devices. It is not very accurate, so for the accurate measurement i would recommend to use DHT21 instead. Anyway, i had DHT-11 in my tool box, so decided to start with it. DHT-11 using very simple 1 wire protocol – host is turning on chip by sending 18ms low signal to the data output and then reading 40 bits of data."

"To read data from the chip it should be connected to the power (5v) and gpio pin. I used pin 2 as VCC, 6 as GND and 11 as GPIO."

"There is no support for this device out of the box on FreeBSD. I found some sample code on the github, see lex/freebsd-gpio-dht11 (https://github.com/lex/freebsd-gpio-dht11) repository. This code was a good starting point, but soon i found 2 issues with it: results are very unreliable, probably due to gpio decoding algorithm, and checksum is not validated, so sometimes values are bogus."

"Initially i was thinking to fix this myself, but later found kernel module for this purpose, 1 wire over gpio (http://www.my-tour.ru/FreeBSD/1-wire_over_gpio/). This module contains DHT11 kernel driver (gpio_sw) which implements DHT-11 protocol in the kernel space and exporting /dev/sw0 for the userland. Driver compiles on FreeBSD11/ARM without any changes."
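The checksum validation the author added is simple to state: the 40 bits read from the sensor form five bytes (humidity integer/decimal, temperature integer/decimal, checksum), and the checksum must equal the low byte of the sum of the first four. A quick sketch with made-up sample bytes:

```shell
# Illustrative frame: 45.0% humidity, 23.4 C, checksum byte 72.
h_int=45; h_dec=0; t_int=23; t_dec=4; csum=72
calc=$(( (h_int + h_dec + t_int + t_dec) % 256 ))
if [ "$calc" -eq "$csum" ]; then
    echo "ok: ${t_int}.${t_dec}C ${h_int}.${h_dec}%"
else
    echo "bogus frame, discarding" >&2
fi
```

Frames whose fifth byte fails this check are exactly the "bogus values" the sample code let through.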
"Use make install to install the driver."

The article goes into how to install and configure the driver, including a set of devfs rules to allow non-root users to read from the sensor. "Final goal was to add this sensor to the domoticz software. It is using LUA scripting to extend its functionality, e.g. to obtain data from non-supported or non standard devices. So, i decided to read /dev/sw0 from the LUA." They ran into some trouble with LUA trying to read too much data at once and had to work around it, but in the end they got the results and were able to use them in the monitoring tool.

***

Tor-ified Home Network using HardenedBSD and a RPi3 (https://github.com/lattera/articles/blob/master/infosec/tor/2017-01-14_torified_home/article.md)

Shawn from HardenedBSD has posted an article on GitHub about his deployment of a new Tor relay on a RPi3. This particular method was attractive since it allows running a relay without it being on a machine which may have personal data, such as SSH keys, files, etc. While his setup is done on HardenedBSD, the same applies to a traditional FreeBSD setup as well.

First up is the list of things needed for this project:

Raspberry Pi 3 Model B Rev 1.2 (aka RPI3)
Serial console cable for the RPI3
Belkin F4U047 USB Ethernet dongle
Insignia NS-CR2021 USB 2.0 SD/MMC memory card reader
32GB SanDisk Ultra PLUS MicroSDHC
A separate system running FreeBSD or HardenedBSD
HardenedBSD clang 4.0.0 image for the RPI3
An external drive to be formatted
A MicroUSB cable to power the RPI3
Two network cables
Optional: Edimax N150 EW-7811Un wireless USB
Basic knowledge of vi

After getting HardenedBSD running on the RPi3 and a serial connection established, he takes us through the process of installing and enabling the various services needed. (Don't forget to growfs your SD card first!) Now, the tricky part is that some of the packages need to be compiled from ports, which is somewhat time-consuming on a RPi.
He strongly recommends not compiling on the SD card (it sounds like personal experience has taught him well) and using iSCSI or an external USB drive instead. With the compiling done, the package / software setup is nearly complete. Next up is firewalling the box, for which he helpfully provides a full PF config that we can copy and paste. The last bits are the torrc configuration knobs which, if you follow his example, will result in a Tor public relay plus a local transparent proxy for you. Bonus! Shawn also provides dhcpd configurations, and even wireless AP configurations, if you want to set up your RPi3 to proxy for devices that connect to it.

***

News Roundup

Unix Admin. Horror Story Summary, version 1.0 (http://www-uxsup.csx.cam.ac.uk/misc/horror.txt)

A great collection of stories, many of which will ring true with our viewers. The very first one is about a user changing root's shell to /usr/local/bin/tcsh but forgetting to make it executable, resulting in not being able to log in as root. I too have run into this issue, in a slightly different way: I had tcsh as my user shell (back before tcsh was in base), and after a major OS upgrade, but before I had a chance to recompile all of my ports, I couldn't ssh in to the remote machine in order to recompile my shell. Now I always use a shell included in the base system, and test it before rebooting after an upgrade.

"Our operations group, a VMS group but trying to learn UNIX, was assigned account administration. They were cleaning up a few non-used accounts like they do on VMS - backup and purge. When they came across the account "sccs", which had never been accessed, away it went. The "deleteuser" utility from DEC asks if you would like to delete all the files in the account. Seems reasonable, huh? Well, the home directory for "sccs" is "/". Enough said :-("

"I was working on a line printer spooler, which lived in /etc. I wanted to remove it, and so issued the command "rm /etc/lpspl."
There was only one problem. Out of habit, I typed "passwd" after "/etc/" and removed the password file. Oops."

I've done things like this as well; finger memory can be dangerous.

"I was happily churning along developing something on a Sun workstation, and was getting a number of annoying permission denieds from trying to write into a directory hierarchy that I didn't own. Getting tired of that, I decided to set the permissions on that subtree to 777 while I was working, so I wouldn't have to worry about it. Someone had recently told me that rather than using plain "su", it was good to use "su -", but the implications had not yet sunk in. (You can probably see where this is going already, but I'll go to the bitter end.) Anyway, I cd'd to where I wanted to be, the top of my subtree, and did su -. Then I did chmod -R 777. I then started to wonder why it was taking so damn long when there were only about 45 files in 20 directories under where I (thought) I was. Well, needless to say, su - simulates a real login, and had put me into root's home directory, /, so I was proceeding to set file permissions for the whole system to wide open. I aborted it before it finished, realizing that something was wrong, but this took quite a while to straighten out."

Where is a ZFS snapshot when you need it?
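One defensive habit that blunts this class of mistake: never point a recursive chmod at "." or rely on where you think you are; spell out the absolute path and refuse to proceed if it is not where you expect. A small sketch (the /tmp tree is created just for the demo):

```shell
# Create a scratch tree so the recursive operation has a safe target:
target=/tmp/perm-demo
mkdir -p "$target/sub"
# Guard: only recurse if the target really sits under /tmp.
case "$target" in
    /tmp/?*) chmod -R 755 "$target" ;;
    *)       echo "refusing: $target is outside /tmp" >&2; exit 1 ;;
esac
echo "permissions set on $target"
```

Had the storyteller's chmod been guarded this way, running it from / would have printed a refusal instead of opening up the whole system.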
***

How individual contributors get stuck (https://medium.com/@skamille/how-do-individual-contributors-get-stuck-63102ba43516)

An interesting post looking at the common causes of people getting stuck when trying to create or contribute new code:

Brainstorming/architecture: "I must have thought through all edge cases of all parts of everything before I can begin this project"
Researching possible solutions forever (often accompanied by a desire to do a "bakeoff" where they build prototypes in different platforms/languages/etc.)
Refactoring: "this code could be cleaner and everything would be just so much easier if we cleaned this up... and this up... and..."
Helping other people instead of doing their assigned tasks (this one isn't a bad thing in an open source community)
Working on side projects instead of the main project (it is your time; it is up to you how to spend it)
Excessive testing (rare)
Excessive automation (rare)
Finish the last 10-20% of a project
Start a project completely from scratch
Do project planning (You need me to write what now? A roadmap?) (this is why FreeBSD has devsummits; some things you just need to whiteboard)
Work with unfamiliar code/libraries/systems
Work with other teams (please don't make me go sit with data engineering!!)
Talk to other people
Ask for help (far beyond the point they realized they were stuck and needed help)
Deal with surprises or unexpected setbacks
Deal with vendors/external partners
Say no, because they can't seem to just say no (instead of saying no they just go into avoidance mode, or worse, always say yes)

"Noticing how people get stuck is a super power, and one that many great tech leads (and yes, managers) rely on to get big things done. When you know how people get stuck, you can plan your projects to rely on people for their strengths and provide them help or even completely side-step their weaknesses.
You know who is good to ask for which kinds of help, and who hates that particular challenge just as much as you do."

"The secret is that all of us get stuck and sidetracked sometimes. There's actually nothing particularly "bad" about this. Knowing the ways that you get hung up is good because you can choose to either a) get over the fears that are sticking you (lack of knowledge, skills, or confidence), b) avoid such tasks as much as possible, and/or c) be aware of your habits and use extra diligence when faced with tackling these areas."

***

Make Docs! (http://www.mkdocs.org/)

"MkDocs is a fast, simple and downright gorgeous static site generator that's geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file." "MkDocs builds completely static HTML sites that you can host on GitHub Pages, Amazon S3, or anywhere else you choose."

It is an easy-to-install Python package, and it includes a server mode that auto-refreshes the page as you write, making it easy to preview your work before you post it online. Everything needs docs, and writing docs should be as simple as possible so that more of them get written. Go write some docs!

***

Experimental FreeNAS 11/12 builds (https://forums.freenas.org/index.php?threads/new-freenas-9-10-with-freebsd-11-12-for-testing.49696/#post-341941)

We know there are a lot of FreeNAS users who listen to BSD Now, so I felt it was important to share this little tidbit. I've posted something to the forums last night which includes links to brand-new spins of FreeNAS 9.10 based upon FreeBSD 11-STABLE and 12-CURRENT. These builds are updated nightly via our Jenkins infrastructure and hopefully will provide a new playground for technical folks and developers to experiment with FreeBSD features in their FreeNAS environment, long before they make it into a -STABLE release.
As usual, the notes of caution apply: these are nightlies, and as such, bugs will abound. Do NOT use these with your production data, unless you are crazy or just want an excuse to test your backup strategy. If you do run these builds, feedback is welcome via the usual channels, such as the bug tracker. The hope is that by testing FreeBSD code earlier, we can vet and determine what is safe / ready to go into mainline FreeNAS sooner rather than later.

***

Beastie Bits

An Explainer on Unix's Most Notorious Code Comment (http://thenewstack.io/not-expected-understand-explainer/)
turn your network inside out with one pf.conf trick (http://www.tedunangst.com/flak/post/turn-your-network-inside-out-with-one-pfconf-trick)
A story of if_get(9) (http://www.grenadille.net/post/2017/01/13/A-story-of-if_get%289%29)
Apple re-affirms its commitment to LLVM/Clang (http://lists.llvm.org/pipermail/llvm-dev/2017-January/108953.html)
python 3k17 (http://www.tedunangst.com/flak/post/python-3k17)
2017 presentation proposals (http://blather.michaelwlucas.com/archives/2848)
NetBSD 7.1_RC1 available (http://mail-index.netbsd.org/netbsd-announce/2017/01/09/msg000259.html)
#define FS_UFS2_MAGIC 0x19540119 (Happy Birthday to Kirk McKusick tomorrow)

***

Feedback/Questions

J - LetsEncrypt (http://pastebin.com/nnQ9ZgyN)
Mike - OpenRC (http://pastebin.com/EZ4tRiVb)
Timothy - ZFS Horror (http://pastebin.com/ZqDFTsnR)
Troels (http://pastebin.com/dhZEnREM)
Jason - Disk Label (http://pastebin.com/q4F95S6h)

***

176: Linking your world

January 11, 2017 1:32:24 66.53 MB Downloads: 0

Another exciting week on BSDNow, we are queued up with LLVM / Linking news, a look at NetBSD’s scheduler, This episode was brought to you by Headlines FreeBSD Kernel and World, and many Ports, can now be linked with lld (https://llvm.org/bugs/show_bug.cgi?id=23214#c40) “With this change applied I can link the entirety of the FreeBSD/amd64 base system (userland world and kernel) with LLD.” “Rafael's done an initial experimental Poudriere FreeBSD package build with lld head, and found almost 20K out of 26K ports built successfully. I'm now looking at getting CI running to test this on an ongoing basis. But, I think we're at the point where an experimental build makes sense.” Such testing will become much easier once llvm 4.0 is imported into -current “I suggest that during development we collect patches in a local git repo -- for example, I've started here for my Poudriere run https://github.com/emaste/freebsd-ports/commits/ports-lld” “It now looks like libtool is responsible for the majority of my failed / skipped ports. Unless we really think we'll add "not GNU" and other hacks to lld we're going to have to address libtool limitations upstream and in the FreeBSD tree. I did look into libtool a few weeks ago, but unfortunately haven't yet managed to produce a patch suitable for sending upstream.” If you are interested in LLVM/Clang/LLD/LLDB etc, check out: A Tourist’s Guide to the LLVM Source Code (http://blog.regehr.org/archives/1453) *** Documenting NetBSD's scheduler tweaks (http://www.feyrer.de/NetBSD/bx/blosxom.cgi/nb_20170109_2108.html) A followup to our previous coverage of improvements to the scheduler in NetBSD “NetBSD's scheduler was recently changed to better distribute load of long-running processes on multiple CPUs. 
So far, the associated sysctl tweaks were not documented, and this was changed now, documenting the kern.sched sysctls.” kern.sched.cacheht_time (dynamic): Cache hotness time in which a LWP is kept on one particular CPU and not moved to another CPU. This reduces the overhead of flushing and reloading caches. Defaults to 3ms. Needs to be given in ``hz'' units, see mstohz(9). kern.sched.balance_period (dynamic): Interval at which the CPU queues are checked for re-balancing. Defaults to 300ms. kern.sched.min_catch (dynamic): Minimum count of migratable (runable) threads for catching (stealing) from another CPU. Defaults to 1 but can be increased to decrease chance of thread migration between CPUs. It is important to have good documentation for these tunables, so that users can understand what it is they are adjusting *** FreeBSD Network Gateway on EdgeRouter Lite (http://codeghar.com/blog/freebsd-network-gateway-on-edgerouter-lite.html) “EdgeRouter Lite is a great device to run at the edge of a home network. It becomes even better when it's running FreeBSD. This guide documents how to setup such a gateway. There are accompanying git repos to somewhat automate the process as well.” “Colin Percival has written a great blog post on the subject, titled FreeBSD on EdgeRouter Lite - no serial port required (http://www.daemonology.net/blog/2016-01-10-FreeBSD-EdgeRouter-Lite.html) . In it he provides and describes a shell script to build a bootable image of FreeBSD to be run on ERL, available from GitHub in the freebsd-ERL-build (https://github.com/cperciva/freebsd-ERL-build/) repo. I have built a Vagrant-based workflow to automate the building of the drive image. It's available on GitHub in the freebsd-edgerouterlite-ansible (https://github.com/hamzasheikh/freebsd-edgerouterlite-ansible) repo. It uses the build script Percival wrote.” “Once you've built the disk image it's time to write it to a USB drive. 
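Writing the built image to a drive is typically a dd invocation; a sketch (the image filename and /dev/da0 device name are assumptions — verify the USB drive's real name with `geom disk list` first, since dd will overwrite whatever the target is):

```shell
# Assumed names -- check your actual image path and device before running:
#   dd if=freebsd-erl.img of=/dev/da0 bs=1m conv=sync
#
# For a safe illustration of the same invocation against a plain file:
printf 'bootcode' > demo.img
dd if=demo.img of=demo.out bs=512 conv=sync 2>/dev/null
wc -c < demo.out    # conv=sync pads the final partial block up to bs
```

The `conv=sync` padding matters when writing boot images, since some devices reject writes that are not a multiple of the block size.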
There are two options: overwrite the original drive in the ERL or buy a new drive. I tried the second option first and wrote to a new SanDisk Ultra Fit 32GB USB 3.0 Flash Drive (SDCZ43-032G-GAM46). It did not work and I later found on some blog that those drives do not work. I have not tried another third party drive since.” The tutorial covers all of the steps, and the configuration files, including rc.conf, IP configuration, DHCP (and v6), pf, and DNS (unbound) “I'm pretty happy with ERL and FreeBSD. There is great community documentation on how to configure all the pieces of software that make a FreeBSD-based home network gateway possible. I can tweak things as needed and upgrade when newer versions become available.” “My plan on upgrading the base OS is to get a third party USB drive that works, write a newer FreeBSD image to it, and replace the drive in the ERL enclosure. This way I can keep a bunch of drives in rotation. Upgrades to newer builds or reverts to last known good version are as easy as swapping USB drives.” Although something more nanobsd style with 2 partitions on the one drive might be easier. “Configuration with Ansible means I don't have to manually do things again and again. As the configs change they'll be tracked in git so I get version control as well. ERL is simply a great piece of network hardware. I'm tempted to try Ubiquiti's WiFi products instead of a mixture of DD-WRT and OpenWRT devices I have now. But that is for another day and perhaps another blog post.” *** A highly portable build system targeting modern UNIX systems (https://github.com/michipili/bsdowl) An exciting new/old project is up on GitHub that we wanted to bring your attention to. 
BSD Owl is a highly portable build-system based around BSD Make that supports a variety of popular (and not so popular) languages, such as: C programs, compiled for several targets C libraries, static and shared, compiled for several targets Shell scripts Python scripts OCaml programs OCaml libraries, with ocamldoc documentation OCaml plugins TeX documents, prepared for several printing devices METAPOST figures, with output as PDF, PS, SVG or PNG, either as part of a TeX document or as standalone documents What about features you may ask? Well BSD Owl has plenty of those to go around: Support of compilation profiles Support of the parallel mode (at the directory level) Support of separate trees for sources and objects Support of architecture-dependant compilation options Support GNU autoconf Production of GPG-signed tarballs Developer subshell, empowered with project-specific scripts Literate programming using noweb Preprocessing with m4 As far as platform support goes, BSD Owl is tested on OSX / Debian Jesse and FreeBSD > 9. Future support for OpenBSD and NetBSD is planned, once they update their respective BSD Make binaries to more modern versions News Roundup find -delete in OpenBSD. Thanks to tedu@ OpenBSD will have this very handy flag to in the future. (https://marc.info/?l=openbsd-tech&m=148342051832692&w=2) OpenBSD’s find(1) utility will now support the -delete operation “This option is not posix (not like that's stopped find accumulating a dozen extensions), but it is in gnu and freebsd (for 20 years). it's also somewhat popular among sysadmins and blogs, etc. and perhaps most importantly, it nicely solves one of the more troublesome caveats of find (which the man page actually covers twice because it's so common and easy to screw up). 
So I think it makes a good addition.” The actual code was borrowed from FreeBSD Using the -delete option is much more performant than forking rm once for each file, and safer because there is no risk of mangling path names If you encounter a system without a -delete option, your best bet is to use the -print0 option of find, which will print each filename terminated by a null byte, and pipe that into xargs -0 rm This avoids any ambiguity caused by files with spaces in the names *** New version of the Lumina desktop released (https://lumina-desktop.org/version-1-2-0-released/) Just in time to kickoff 2017 we have a new release of Lumina Desktop (1.2.0) Some of the notable changes include fixes to make it easier to port to other platforms, and some features: New Panel Plugins: “audioplayer” (panel version of the desktop plugin with the same name): Allows the user to load/play audio files directly through the desktop itself. “jsonmenu” (panel version of the menu plugin with the same name): Allows an external utility/script to be used to generate a menu/contents on demand. New Menu Plugins: “lockdesktop”: Menu option for instantly locking the desktop session. New Utilities: lumina-archiver: This is a pure Qt5 front-end to the “tar” utility for managing/creating archives. This can also use the dd utility to burn a “*.img” file to a USB device for booting.“ Looks like the news already made its rounds to a few different sites, with Phoronix and Softpedia picking it up as well Phoronix (http://www.phoronix.com/scan.php?page=news_item&px=Lumina-1.2-Released) Softpedia (http://news.softpedia.com/news/lumina-1-2-desktop-environments-launches-for-trueos-with-various-enhancements-511495.shtml) TrueOS users running the latest updates are already on the pre-release version of 1.2.1, so nothing has to be done there to get the latest and greatest. 
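Circling back to the find(1) item above: the -print0/xargs -0 fallback it recommends can be sketched like this (demo directory and filenames are made up for illustration):

```shell
# Fallback when find(1) lacks -delete: NUL-terminate the file names so
# spaces (or even newlines) in names can't be misparsed by the shell.
mkdir -p demo && touch "demo/a file.tmp" demo/b.tmp demo/keep.txt
find demo -name '*.tmp' -print0 | xargs -0 rm --
find demo -type f    # only demo/keep.txt remains
```

Note that "a file.tmp" contains a space; a naive `rm $(find …)` would have tried to remove "a" and "file.tmp" separately.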
dd is not a disk writing tool (http://www.vidarholen.net/contents/blog/?p=479) “If you’ve ever used dd, you’ve probably used it to read or write disk images:” > # Write myfile.iso to a USB drive > dd if=myfile.iso of=/dev/sdb bs=1M “Usage of dd in this context is so pervasive that it’s being hailed as the magic gatekeeper of raw devices. Want to read from a raw device? Use dd. Want to write to a raw device? Use dd. This belief can make simple tasks complicated. How do you combine dd with gzip? How do you use pv if the source is raw device? How do you dd over ssh?” “The fact of the matter is, dd is not a disk writing tool. Neither “d” is for “disk”, “drive” or “device”. It does not support “low level” reading or writing. It has no special dominion over any kind of device whatsoever.” Then a number of alternatives are discussed “However, this does not mean that dd is useless! The reason why people started using it in the first place is that it does exactly what it’s told, no more and no less. If an alias specifies -a, cp might try to create a new block device rather than a copy of the file data. If using gzip without redirection, it may try to be helpful and skip the file for not being regular. Neither of them will write out a reassuring status during or after a copy.” “dd, meanwhile, has one job*: copy data from one place to another. It doesn’t care about files, safeguards or user convenience. It will not try to second guess your intent, based on trailing slashes or types of files. When this is no longer a convenience, like when combining it with other tools that already read and write files, one should not feel guilty for leaving dd out entirely.” “dd is the swiss army knife of the open, read, write and seek syscalls. It’s unique in its ability to issue seeks and reads of specific lengths, which enables a whole world of shell scripts that have no business being shell scripts. Want to simulate a lseek+execve? Use dd! Want to open a file with O_SYNC? Use dd! 
Want to read groups of three byte pixels from a PPM file? Use dd!” “It’s a flexible, unique and useful tool, and I love it. My only issue is that, far too often, this great tool is being relegated to and inappropriately hailed for its most generic and least interesting capability: simply copying a file from start to finish.” “dd actually has two jobs: Convert and Copy. Legend has it that the intended name, “cc”, was taken by the C compiler, so the letters were shifted by one to give “dd”. This is also why we ended up with a Window system called X.” dd countdown (https://eriknstr.github.io/utils/dd-countdown.htm) *** Bhyve setup for tcp testing (https://www.strugglingcoder.info/index.php/bhyve-setup-for-tcp-testing/) FreeBSD Developer Hiren Panchasara writes about his setup to use bhyve to test changes to the TCP stack in FreeBSD “Here is how I test simple FreeBSD tcp changes with dummynet on bhyve. I’ve already wrote down how I do dummynet (https://www.strugglingcoder.info/index.php/drop-a-packet/) so I’ll focus on bhyve part.” “A few months back when I started looking into improving FreeBSD TCP’s response to packet loss, I looked around for traffic simulators which can do deterministic packet drop for me.” “I had used dummynet(4) before so I thought of using it but the problem is that it only provided probabilistic drops. You can specify dropping 10% of the total packets” So he wrote a quick hack, hopefully he’ll polish it up and get it committed “Setup: I’ll create 3 bhyve guests: client, router and server” “Both client and server need their routing tables setup correctly so that they can reach each other. The Dummynet node is the router / traffic shaping node here. 
We need to enable forwarding between interfaces: sysctl net.inet.ip.forwarding=1” “We need to setup links (called ‘pipes’) and their parameters on dummynet node” “For simulations, I run a lighttpd web-server on the server which serves different sized objects and I request them via curl or wget from the client. I have tcpdump running on any/all of four interfaces involved to observe traffic and I can see specified packets getting dropped by dummynet. sysctl net.inet.ip.dummynet.iopktdrop is incremented with each packet that dummynet drops.” “Here, 192.* addresses are for ssh and 10.* are for guests to be able to communicate within themselves.” Create 2 tap interfaces for each end point, and 3 from the router. One each for SSH/control, and the others for the test flows. Then create 3 bridges, the first includes all of the control tap interfaces, and your hosts’ real interface. This allows the guests to reach the internet to download packages etc. The other two bridges form the connections between the three VMs The creation and configuration of the VMs is documented in detail I used a setup very similar to this for teaching the basics of how TCP works when I was teaching at a local community college *** Beastie Bits Plan9 on Bhyve (https://twitter.com/pr1ntf/status/817895393824382976) Get your name in the relayd book (http://blather.michaelwlucas.com/archives/2832) Ted Unangst’s 2016 Computer Reviews (http://www.tedunangst.com/flak/post/2016-computer-review) Bryan Cantrill on Developer On Fire podcast (http://developeronfire.com/episode-198-bryan-cantrill-persistence-and-action) 2016 in review: pf/ipfw's impact on forwarding performance over time, on 8 core Atom (http://dev.bsdrp.net/benchs/2016.SM5018A-FTN4-Chelsio.png) #Wayland Weston with X and EGL clients, running on #FreeBSD in VBox with new scfb backend. More coming soon! 
(https://twitter.com/johalun/status/819039940914778112) Feedback/Questions Eddy - TRIM Partitioning (http://pastebin.com/A0LSipCj) Matt - Why FreeBSD? (http://pastebin.com/UE1k4Q99) Shawn - ZFS Horror? (http://pastebin.com/TjTkqHA4) Andrew - Bootloaders (http://pastebin.com/Baxd6Pjy) GELIBoot Paper (http://allanjude.com/talks/AsiaBSDCon2016_geliboot_pdf1a.pdf) FreeBSD Architecture Handbook (https://www.freebsd.org/doc/en_US.ISO8859-1/books/arch-handbook/boot.html) Bryan - ZFS Error (http://pastebin.com/NygwchFD) ***

175: How the Dtrace saved Christmas

January 04, 2017 1:37:29 70.19 MB Downloads: 0

This week on BSDNow, we’ve got all sorts of post-holiday goodies to share. New OpenSSL APIs, Dtrace, OpenBSD This episode was brought to you by Headlines OpenSSL 1.1 API migration path, or the lack thereof (https://www.mail-archive.com/tech@openbsd.org/msg36437.html) As many of you will already be aware, the OpenSSL 1.1.0 release intentionally introduced significant API changes from the previous release. In summary, a large number of data structures that were previously publically visible have been made opaque, with accessor functions being added in order to get and set some of the fields within these now opaque structs. It is worth noting that the use of opaque data structures is generally beneficial for libraries, since changes can be made to these data structures without breaking the ABI. As such, the overall direction of these changes is largely reasonable. However, while API change is generally necessary for progression, in this case it would appear that there is NO transition plan and a complete disregard for the impact that these changes would have on the overall open source ecosystem. So far it seems that the only approach is to place the migration burden onto each and every software project that uses OpenSSL, pushing significant code changes to each project that migrates to OpenSSL 1.1, while maintaining compatibility with the previous API. This is forcing each project to provide their own backwards compatibility shims, which is practically guaranteeing that there will be a proliferation of variable quality implementations; it is almost a certainty that some of these will contain bugs, potentially introducing security issues or memory leaks. I think this will be a bigger issue for other operating systems that do not have the flexibility of the ports tree to deliver a newer version of OpenSSL. If a project switches from the old API to the new API, and the OS only provides the older branch of OpenSSL, how can the application work? 
Of course, this leaves the issue, if application A wants OpenSSL 1.0, and application B only works with OpenSSL 1.1, how does that work? Due to a number of factors, software projects that make use of OpenSSL cannot simply migrate to the 1.1 API and drop support for the 1.0 API - in most cases they will need to continue to support both. Firstly, I am not aware of any platform that has shipped a production release with OpenSSL 1.1 - any software that supported OpenSSL 1.1 only, would effectively be unusable on every platform for the time being. Secondly, the OpenSSL 1.0.2 release is supported until the 31st of December 2019, while OpenSSL 1.1.0 is only supported until the 31st of August 2018 - any LTS style release is clearly going to consider shipping with 1.0.2 as a result. Platforms that are attempting to ship with OpenSSL 1.1 are already encountering significant challenges - for example, Debian currently has 257 packages (out of 518) that do not build against OpenSSL 1.1. There are also hidden gotchas for situations where different libraries are linked against different OpenSSL versions and then share OpenSSL data structures between them - many of these problems will be difficult to detect since they only fail at runtime. It will be interesting to see what happens with OpenSSL, and LibreSSL Hopefully, most projects will decide to switch to the cleaner APIs provided by s2n or libtls, although they do not provide the entire functionality of the OpenSSL API. Hacker News comments (https://news.ycombinator.com/item?id=13284648) *** exfiltration via receive timing (http://www.tedunangst.com/flak/post/exfiltration-via-receive-timing) Another similar way to create a backchannel but without transmitting anything is to introduce delays in the receiver and measure throughput as observed by the sender. All we need is a protocol with transmission control. Hmmm. 
Actually, it’s easier (and more reliable) to code this up using a plain pipe, but the same principle applies to networked transmissions. For every digit we want to “send” back, we sleep a few seconds, then drain the pipe. We don’t care about the data, although if this were a video file or an OS update, we could probably do something useful with it. Continuously fill the pipe with junk data. If (when) we block, calculate the difference between before and after. This is our secret backchannel data. (The reader and writer use different buffer sizes because on OpenBSD at least, a writer will stay blocked even after a read depending on the space that opens up. Even simple demos have real world considerations.) In this simple example, the secret data (argv) is shared by the processes, but we can see that the writer isn’t printing them from its own address space. Nevertheless, it works. Time to add random delays and buffering to firewalls? Probably not. An interesting thought experiment that shows just how many ways there are to covertly convey a message *** OpenBSD Desktop in about 30 Minutes (https://news.ycombinator.com/item?id=13223351) Over at hackernews we have a very non-verbose, but handy guide to getting to an OpenBSD desktop in about 30 minutes! First, the guide will assume you’ve already installed OpenBSD 6.0, so you’ll need to at least be at the shell prompt of your freshly installed system to begin. With that, now it’s time to do some tuning. 
Editing some resource limits in login.conf will be our initial task, upping some datasize tunables to 2GB. Next up, we will edit some of the default “doas” settings to something a bit more workable for desktop computing. Another handy trick: editing your .profile to have your PKG_PATH variables set automatically will make installing packages later much simpler. One thing some folks may overlook: disabling atime can speed up disk performance (you probably don’t care about atime on your desktop anyway), so this guide will show you what knobs to tweak in /etc/fstab to do so. After some final WPA / WiFi configuration, we then drop to “mere mortal” mode and begin our package installations. In this particular guide, he will be setting up Lumina Desktop (which, yes, is on OpenBSD). A few small tweaks later for xscreensaver and your xinitrc file, and you are ready to run “startx” and begin your desktop session! All in all, a great guide which, if you are fast, can probably be done in even less than 30 minutes and will result in a rock-solid OpenBSD desktop rocking Lumina nonetheless. *** How DTrace saved Christmas (https://hackernoon.com/dtrace-at-home-145ba773371e) Adam Leventhal, one of the co-creators of DTrace, wrote up this post about how he uses DTrace at home, to save Christmas I had been procrastinating making the family holiday card. It was a combination of having a lot on my plate and dreading the formulation of our annual note recapping the year; there were some great moments, but I’m glad I don’t have to do 2016 again. It was just before midnight and either I’d make the card that night or leave an empty space on our friends’ refrigerators. Adobe Illustrator had other ideas: “Unable to set maximum number of files to be opened” I’m not the first person to hit this. The problem seems to have existed since CS6 was released in 2016. 
None of the solutions were working for me, and — inspired by Sara Mauskopf’s excellent post (https://medium.com/startup-grind/how-to-start-a-company-with-no-free-time-b70fbe7b918a#.uujdblxc6) — I was rapidly running out of the time bounds for the project. Enough; I’d just DTrace it. A colleague scoffed the other day, “I mean, how often do you actually use DTrace?” In his mind DTrace was for big systems, critical systems, when dollars and lives were at stake. My reply: I use DTrace every day. I can’t imagine developing software without DTrace, and I use it when my laptop (not infrequently) does something inexplicable (I’m forever grateful to the Apple team that ported it to Mac OS X). Illustrator is failing on setrlimit(2) and blowing up as a result. Let’s confirm that it is in fact returning -1: $ sudo dtrace -n 'syscall::setrlimit:return/execname == "Adobe Illustrato"/{ printf("%d %d", arg1, errno); }' dtrace: description 'syscall::setrlimit:return' matched 1 probe CPU ID FUNCTION:NAME 0 532 setrlimit:return -1 1 There it is. And setrlimit(2) is failing with errno 1 which is EPERM (value too high for non-root user). I already tuned up the files limit pretty high. Let’s confirm that it is in fact setting the files limit and check the value to which it’s being set. To write this script I looked at the documentation for setrlimit(2) (hooray for man pages!) to determine the position of the resource parameter (arg0) and the type of the value parameter (struct rlimit). 
I needed the DTrace copyin() subroutine to grab the structure from the process’s address space: $ sudo dtrace -n 'syscall::setrlimit:entry/execname == "Adobe Illustrato"/{ this->r = *(struct rlimit *)copyin(arg1, sizeof (struct rlimit)); printf("%x %x %x", arg0, this->r.rlim_cur, this->r.rlim_max); }' dtrace: description 'syscall::setrlimit:entry' matched 1 probe CPU ID FUNCTION:NAME 0 531 setrlimit:entry 1008 2800 7fffffffffffffff Looking through /usr/include/sys/resource.h we can see that 1008 corresponds to the number of files (RLIMIT_NOFILE | _RLIMIT_POSIX_FLAG) The quickest solution was to use DTrace again to whack a smaller number into that struct rlimit. Easy: $ sudo dtrace -w -n 'syscall::setrlimit:entry/execname == "Adobe Illustrato"/{ this->i = (rlim_t *)alloca(sizeof (rlim_t)); *this->i = 10000; copyout(this->i, arg1 + sizeof (rlim_t), sizeof (rlim_t)); }' dtrace: description 'syscall::setrlimit:entry' matched 1 probe dtrace: could not enable tracing: Permission denied Oh right. Thank you SIP (System Integrity Protection). This is a new laptop (at least a new motherboard due to some bizarre issue) which probably contributed to Illustrator not working when once it did. Because it’s new I haven’t yet disabled the part of SIP that prevents you from using DTrace on the kernel or in destructive mode (e.g. copyout()). It’s easy enough to disable, but I’m reboot-phobic — I hate having to restart my terminals — so I went to plan B: lldb. After using DTrace to get the address of the setrlimit function, Adam used lldb to change the result before it got back to the application: (lldb) break set -n _init Breakpoint 1: 47 locations. 
(lldb) run … (lldb) di -s 0x1006e5b72 -c 1 0x1006e5b72: callq 0x1011628e0 ; symbol stub for: setrlimit (lldb) memory write 0x1006e5b72 0x31 0xc0 0x90 0x90 0x90 (lldb) di -s 0x1006e5b72 -c 4 0x1006e5b72: xorl %eax, %eax 0x1006e5b74: nop 0x1006e5b75: nop 0x1006e5b76: nop Next I just did a process detach and got on with making that holiday card… DTrace was designed for solving hard problems on critical systems, but the need to understand how systems behave exists in development and on consumer systems. Just because you didn’t write a program doesn’t mean you can’t fix it. News Roundup Say my Blog's name! (https://functionallyparanoid.com/2016/12/22/say-my-blogs-name/) Brian Everly over at functionally paranoid has a treat for us today. Let us give you a moment to get the tin-foil hats on… Ok, done? Let’s begin! He starts off with a look at physical security. He begins by listing your options: BIOS passwords – Not something I’m typically impressed with. Most can be avoided by opening up the machine, closing a jumper and powering it up to reset the NVRAM to factory defaults. I don’t even bother with them. Full disk encryption – This one really rings my bell in a positive way. If you can kill power to the box (either because the bad actor has to physically steal it and they aren’t carrying around a pile of car batteries and an inverter or because you can interrupt power to it some other way), then the disk will be encrypted. The other beauty of this is that if a drive fails (and they all do eventually) you don’t have to have any privacy concerns about chucking it into an electronics recycler (or if you are a bad, bad person, into a landfill) because that data is effectively gibberish without the key (or without a long time to brute force it). Two factor auth for logins – I like this one as well. 
I’m not a fan of biometrics because if your fingerprint is compromised (yes, it can happen – read (https://www.washingtonpost.com/news/federal-eye/wp/2015/07/09/hack-of-security-clearance-system-affected-21-5-million-people-federal-authorities-say/) about the department of defense background checks that were extracted by a bad agent – they included fingerprint images) you can’t exactly send off for a new finger. Things like the YubiKey (https://www.yubico.com/) are pretty slick. They require that you have the physical hardware key as well as the password so unless the bad actor lifted your physical key, they would have a much harder time with physical access to your hardware. Out of those options, Brian mentions that he uses disk encryption and yubi-key for all his secure network systems. Next up is network segmentation, in this case the first thing to do is change your admin password for any ISP supplied modem / router. He goes on to scare us of javascript attacks being used not against your local machine, but instead non WAN exposed router admin interface. Scary Stuff! For added security, naturally he firewalls the router by plugging in the LAN port to a OpenBSD box which does the 2nd layer of firewall / router protection. What about privacy and browsing? Here’s some more of his tips: I use Unbound as my DNS resolver on my local network (with all UDP port 53 traffic redirected to it by pf so I don’t have to configure anything on the clients) and then forward the traffic to DNSCrypt Proxy, caching the results in Unbound. I notice ZERO performance penalty for this and it greatly enhances privacy. This combination of Unbound and DNSCrypt Proxy works very well together. You can even have redundancy by having multiple upstream resolvers running on different ports (basically run the DNSCrypt Proxy daemon multiple times pointing to different public resolvers). I also use Firefox exclusively for my web browsing. 
By leveraging the tips on this page (https://www.privacytools.io/), you can lock it down to do a great job of privacy protection. The fact that your laptop’s battery drain rate can be used to fingerprint your browser completely trips me out but hey – that’s the world we live in.’ What about the cloud you may ask? Well Brian has a nice solution for that as well: I recently decided I would try to live a cloud-free life and I’ll give you a bit of a synopsis on it. I discovered a wonderful Open Source project called FreeNAS (http://www.freenas.org/). What this little gem does is allow you to install a FreeBSD/zfs file server appliance on amd64 hardware and have a slick administrative web interface for managing it. I picked up a nice SuperMicro motherboard and chassis that has 4 hot swap drive bays (and two internal bays that I used to mirror the boot volume on) and am rocking the zfs lifestyle! (Thanks Alan Jude!) One of the nicest features of the FreeNAS is that it provides the ability to leverage the FreeBSD jail functionality in an easy to use way. It also has plugins but the security on those is a bit sketchy (old versions of libraries, etc.) so I decided to roll my own. I created two jails – one to run OwnCloud (yeah, I know about NextCloud and might switch at some point) and the other to run a full SMTP/IMAP email server stack. I used Lets Encrypt (https://letsencrypt.org/) to generate the SSL certificates and made sure I hit an A on SSLLabs (https://www.ssllabs.com/) before I did anything else. His post then goes in to talk about Backups and IoT devices, something else you need to consider in this truely paranoid world we are forced to live in. We even get a nice shout-out near the end! Enter TarSnap (http://www.tarsnap.com/) – a company that advertises itself as “Online Backups for the Truly Paranoid”. It brings a tear to my eye – a kindred spirit! 
:-) Thanks again to Alan Jude and Kris Moore from the BSD Now podcast (http://www.bsdnow.tv/) for turning me onto this company. It has a very easy command syntax (yes, it isn’t a GUI tool – suck it up buttercup, you wanted to learn the shell didn’t you?) and even allows you to compile the thing from source if you want to.” We’ve only covered some of the highlights here, but you really should take a few moments of your time today and read this top to bottom. Lots of good tips here, already thinking how I can secure my home network better. The open source book: “Producing Open Source Software” (http://producingoss.com/en/producingoss.pdf) “How to Run a Successful Free Software Project” by Karl Fogel 9 chapters and over 200 pages of content, plus many appendices Some interesting topics include: Choosing a good name version control bug tracking creating developer guidelines setting up communications channels choosing a license (although this guide leans heavily towards the GPL) setting the tone of the project joining or creating a Non-Profit Organization the economics of open source release engineering, packaging, nightly builds, etc how to deal with forks A lot of good information packaged into this ebook This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License *** DTrace Flamegraphs for node.js on FreeBSD (http://www.venshare.com/dtrace-flamegraphs-for-freebsd-and-node-js-2/) One of the coolest tools built on top of DTrace is flamegraphs They are a very accurate, and visual way to see where a program is spending its time, which can tell you why it is slow, or where it could be improved. Further enhancements include off-cpu flame graphs, which tell you when the program is doing nothing, which can also be very useful > Recently BSD UNIXes are being acknowledged by the application development community as an interesting operating system to deploy to. 
This is not surprising given that FreeBSD had jails, the original container system, about 17 years ago and a lot of network focused businesses such as Netflix see it as the best way to deliver content. This developer interest has led to hosting providers supporting FreeBSD, e.g. Amazon, Azure, and Joyent, and you can get a two month free instance at Digital Ocean. DTrace is another vital feature for anyone who has had to deal with production issues and has been in FreeBSD since version 9. As of FreeBSD 11 the operating system now contains some great work by Fedor Indutny so you can profile node applications and create flamegraphs of node.js processes without any additional runtime flags or restarting of processes. This is one of the most important things about DTrace. Many applications include some debugging functionality, but they require that you stop the application, and start it again in debugging mode. Some even require that you recompile the application in debugging mode. Being able to attach DTrace to an application, while it is under load, while the problem is actively happening, can be critical to figuring out what is going on. In order to configure your FreeBSD instance to utilize this feature, make the following changes to the configuration of the server: load the DTrace module at boot; increase some DTrace limits; install node with the optional DTrace feature compiled in; then follow the generic node.js flamegraph tutorial (https://nodejs.org/en/blog/uncategorized/profiling-node-js/) > I hope you find this article useful. The ability to look at a runtime in this manner has saved me twice this year and I hope it will save you in the future too. My next post on FreeBSD and node.js will be looking at some scenarios on utilising the ZFS features. 
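The setup steps listed above can be sketched as shell commands. This is only a sketch: the loader knob, the sysctl name and value, and the ports origin below are assumptions drawn from common FreeBSD DTrace setup guides, not from the article itself — verify them against your release.

```shell
# 1) Load the DTrace modules at boot (assumed knob name)
echo 'dtraceall_load="YES"' >> /boot/loader.conf

# 2) Raise a DTrace limit commonly said to be needed for node's ustack helper
#    (assumed sysctl and value)
echo 'kern.dtrace.helper_actions_max=16000' >> /etc/sysctl.conf

# 3) Build node from ports with its optional DTrace knob enabled
cd /usr/ports/www/node
make config          # tick the DTRACE option in the menu
make install clean
```

After a reboot (or a kldload plus applying the sysctl by hand), the generic node.js flamegraph tutorial linked above should apply unchanged.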
Also check out Brendan Gregg’s ACM Queue Article (http://queue.acm.org/detail.cfm?id=2927301) “The Flame Graph: This visualization of software execution is a new necessity for performance profiling and debugging” SSHGuard 2.0 Call for Testing (https://sourceforge.net/p/sshguard/mailman/message/35580961/) SSHGuard is a tool for monitoring brute force attempts and blocking them It has been a favourite of mine for a while because it runs as a pipe from syslogd, rather than reading the log files from the disk A lot of work to get SSHGuard working with new log sources (journalctl, macOS log) and backends (firewalld, ipset) has happened in 2.0. The new version also uses a configuration file. Most importantly, SSHGuard has been split into several processes piped into one another (sshg-logmon | sshg-parser | sshg-blocker | sshg-fw). sshg-parser can run with capsicum(4) and pledge(2). sshg-blocker can be sandboxed in its default configuration (without pid file, whitelist, blacklisting) and has not been tested sandboxed in other configurations. Breaking the processes up so that the sensitive bits can be sandboxed is very nice to see *** Beastie Bits pjd’s 2007 paper from AsiaBSDCon: “Porting the ZFS file system to the FreeBSD operating system” (https://2007.asiabsdcon.org/papers/P16-paper.pdf) A Message From the FreeBSD Foundation (https://vimeo.com/user60888329) Remembering Roger Faulkner, Unix Champion (http://thenewstack.io/remembering-roger-faulkner/) and A few HN comments (including Bryan Cantrill) (https://news.ycombinator.com/item?id=13293596) Feedback/Questions Peter - TrueOS Network (http://pastebin.com/QtyJeHMk) Chris - Remote Desktop (http://pastebin.com/ru726VTV) Goetz - Geli on Serial (http://pastebin.com/LQZPgF5g) Joe - BGP (http://pastebin.com/jFeL8zKX) Alejandro - BSD Router (http://pastebin.com/Xq9cbmfn) ***

174: 2016 Highlights

December 29, 2016 2:55:34 84.27 MB Downloads: 0

A look back at 2016 This episode was brought to you by

Links

ZFS in the trenches | BSD Now 123
One small step for DRM, one giant leap for BSD | BSD Now 143
The Laporte has landed! | BSD Now 152
Ham, Radio & Pie, Oh My! | BSD Now 158
The Foundation of NetBSD | BSD Now 162
Return of the Cantrill | BSD Now 163

173: Carry on my Wayland son

December 21, 2016 1:14:38 53.73 MB Downloads: 0

This week on the show, we’ve got some great stories to bring you, a look at the odder side of UNIX history This episode was brought to you by Headlines syspatch in testing state (http://marc.info/?l=openbsd-tech&m=148058309126053&w=2) Antoine Jacoutot (ajacoutot@) has posted a call for testing for OpenBSD’s new syspatch tool “syspatch(8), a "binary" patch system for -release is now ready for early testing. This does not use binary diffing to update the system, but regular signed tarballs containing the updated files (ala installer).” “I would appreciate feedback on the tool. But please send it directly to me, there's no need to pollute the list. This is obviously WIP and the tool may or may not change in drastic ways.” “These test binary patches are not endorsed by the OpenBSD project and should not be trusted, I am only providing them to get early feedback on the tool. If all goes as planned, I am hoping that syspatch will make it into the 6.1 release; but for it to happen, I need to know how it breaks your systems :-)” Instructions (http://syspatch.openbsd.org/pub/OpenBSD/6.0/syspatch/amd64/README.txt) If you test it, report back and let us know how it went *** Weston working (https://lists.freebsd.org/pipermail/freebsd-current/2016-December/064198.html) Over the past few years we’ve had some user-interest in the state of Wayland / Weston on FreeBSD. In the past day or so, Johannes Lundberg has sent in a progress report to the FreeBSD mailing lists. Without further ado: We had some progress with Wayland that we'd like to share. Wayland (v1.12.0) Working Weston (v1.12.0) Working (Porting WIP) Weston-clients (installed with wayland/weston port) Working XWayland (run X11 apps in Wayland compositor) Works (maximized window only) if started manually but not when launching X11 app from Weston. Most likely problem with Weston IPC. Sway (i3-compatible Wayland compositor) Working SDL20 (Wayland backend) games/stonesoup-sdl briefly tested. 
https://twitter.com/johalun/status/811334203358867456 GDM (with Wayland) Halted - depends on logind. GTK3 gtk3-demo runs fine on Weston (might have to set GDK_BACKEND=wayland first). GTK3 apps working (gedit, gnumeric, xfce4-terminal tested, xfce desktop (4.12) does not yet support GTK3)” Johannes goes on to give instructions on how / where you can fetch their WIP and do your own testing. At the moment you’ll need Matt Macy’s newer Intel video work, as well as their ports tree which includes all the necessary software bits. Before anybody asks, yes we are watching this for TrueOS! *** Where the rubber meets the road (part two) (https://functionallyparanoid.com/2016/12/15/where-the-rubber-meets-the-road-part-two/) Continuing with our story from Brian Everly from a week ago, we have an update today on the process to dual-boot OpenBSD with Arch Linux. As we last left off, Arch was up and running on the laptop, but some quirks in the hardware meant OpenBSD would take a bit longer. With those issues resolved and the HD seen again, the next issue that reared its head was OpenBSD not seeing the partition tables on the disk. After much frustration, it was time to nuke and pave, starting with OpenBSD first this time. After a successful GPT partitioning and install of OpenBSD, he went back to installing Arch, and then the story got more interesting. “I installed Arch as I detailed in my last post; however, when I fired up gdisk I got a weird error message: “Warning! Disk size is smaller than the main header indicates! Loading secondary header from the last sector of the disk! You should use ‘v’ to verify disk integrity, and perhaps options on the expert’s menu to repair the disk.” Immediately after this, I saw a second warning: “Caution: Invalid backup GPT header, but valid main header; regenerating backup header from main header.” And, not to be outdone, there was a third: “Warning! Main and backup partition tables differ! 
Use the ‘c’ and ‘e’ options on the recovery & transformation menu to examine the two tables.” Finally (not kidding), there was a fourth: “Warning! One or more CRCs don’t match. You should repair the disk!” Given all of that, I thought to myself, “This is probably why I couldn’t see the disk properly when I partitioned it under Linux on the OpenBSD side. I’ll let it repair things and I should be good to go.” I then followed the recommendation and repaired things, using the primary GPT table to recreate the backup one. I then installed Arch and figured I was good to go.” After confirming through several additional re-installs that the behavior was reproducible, he then decided to go full-on crazy, and partition with MBR. That in and of itself was a challenge, since as he mentions, not many people dual-boot OpenBSD with Linux on MBR, especially using LUKS and LVM! If you want to see the details on how that was done, check it out. The story ends in success though! And better yet: “Now that I have everything working, I’ll restore my config and data to Arch, configure OpenBSD the way I like it and get moving. I’ll take some time and drop a note on the tech@ mailing list for OpenBSD to see if they can figure out what the GPT problem was I was running into. Hopefully it will make that part of the code stronger to get an edge-case bug report like this.” Take note here, if you run into issues like this with any OS, be sure to document in detail what happened so developers can explore solutions to the issue. *** FreeBSD and ZFS as a time capsule for OS X (https://blog.feld.me/posts/2016/12/using-freebsd-as-a-time-capsule-for-osx/) Do you have any Apple users in your life? Perhaps you run FreeBSD for ZFS somewhere else in the house or office. Well today we have a blog post from Mark Felder which shows how you can use FreeBSD as a time-capsule for your OSX systems. The setup is quite simple; to get started you’ll need packages for netatalk3 and avahi-app for service discovery. 
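A minimal sketch of that starting point, using the package names given above — the rc.conf knob names are my assumption, so check the rc scripts the packages actually install:

```shell
# Hypothetical sketch: install the Time Machine-serving pieces and
# enable them at boot. netatalk wants dbus, and avahi handles discovery.
pkg install -y netatalk3 avahi-app
sysrc dbus_enable=YES avahi_daemon_enable=YES netatalk_enable=YES
```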
Next up will be your AFP configuration. He helpfully provides a nice example that you should be able to just cut-n-paste. Be sure to check the hosts allow lines and adjust to fit your network. Also of note will be the backup location and valid users to adjust. A little easier should be the avahi setup, which can be a straight copy-n-paste from the site, which will perform the service advertisements. The final piece is just enabling specific services in /etc/rc.conf and either starting them by hand, or rebooting. At this point your OSX systems should be able to discover the new time-capsule provider on the network and DTRT. *** News Roundup netbenches - FreeBSD network forwarding performance benchmark results (https://github.com/ocochard/netbenches) Olivier Cochard-Labbé, original creator of FreeNAS, and leader of the BSD Router Project, has a github repo of network benchmarks There are many interesting results, and all of the scripts, documentation, and configuration files to run the tests yourself IPSec Performance on an Atom C2558, 12-head vs IPSec Performance Branch (https://github.com/ocochard/netbenches/tree/master/Atom_C2558_4Cores-Intel_i350/ipsec/results/fbsd12.projects-ipsec.equilibrium) Compared to: Xeon L5630 2.13GHz (https://github.com/ocochard/netbenches/tree/2f3bb1b3c51e454736f1fcc650c3328071834f8d/Xeon_L5630-4Cores-Intel_82599EB/ipsec/results/fbsd11.0) and IPSec with Authentication (https://github.com/ocochard/netbenches/tree/305235114ba8a3748ad9681c629333f87f82613a/Atom_C2558_4Cores-Intel_i350/ipsec.ah/results/fbsd12.projects-ipsec.equilibrium) I look forward to seeing tests on even more hardware, as people with access to different configurations try out these benchmarks *** A tcpdump Tutorial and Primer with Examples (https://danielmiessler.com/study/tcpdump/) Most users will be familiar with the basics of using tcpdump, but this tutorial/primer is likely to fill in a lot of blanks, and advance many users’ understanding of tcpdump “tcpdump is the 
premier network analysis tool for information security professionals. Having a solid grasp of this über-powerful application is mandatory for anyone desiring a thorough understanding of TCP/IP. Many prefer to use higher level analysis tools such as Wireshark, but I believe this to usually be a mistake.” tcpdump is an important tool for any system or network administrator, it is not just for security. It is often the best way to figure out why the network is not behaving as expected. “In a discipline so dependent on a true understanding of concepts vs. rote learning, it’s important to stay fluent in the underlying mechanics of the TCP/IP suite. A thorough grasp of these protocols allows one to troubleshoot at a level far beyond the average analyst, but mastery of the protocols is only possible through continued exposure to them.” Not just that, but TCP/IP is a very interesting protocol, considering how little it has changed in its 40+ year history “First off, I like to add a few options to the tcpdump command itself, depending on what I’m looking at. The first of these is -n, which requests that names are not resolved, resulting in the IPs themselves always being displayed. The second is -X, which displays both hex and ascii content within the packet.” “It’s also important to note that tcpdump only takes the first 96 bytes of data from a packet by default. If you would like to look at more, add the -s number option to the mix, where number is the number of bytes you want to capture. 
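Putting the quoted flags together, a typical capture might look like the following; the interface name and filter expression are illustrative, not from the article:

```shell
# -n: skip name resolution, -X: show hex + ASCII payloads,
# -s 0: capture the full packet rather than the default snaplength.
# Substitute your own interface (em0 here) and filter.
tcpdump -n -X -s 0 -i em0 'tcp port 443'
```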
I recommend using 0 (zero) for a snaplength, which gets everything.” The page has a nice table of the most useful options It also has a great primer on doing basic filtering If you are relatively new to using tcpdump, I highly recommend you spend a few minutes reading through this article *** How Unix made it to the top (http://minnie.tuhs.org/pipermail/tuhs/2016-December/007519.html) Doug McIlroy gives us a nice background post on how “Unix made it to the top” It’s fairly short / concise, so I felt it would be good to read in its entirety. “It has often been told how the Bell Labs law department became the first non-research department to use Unix, displacing a newly acquired stand-alone word-processing system that fell short of the department's hopes because it couldn't number the lines on patent applications, as USPTO required. When Joe Ossanna heard of this, he told them about roff and promised to give it line-numbering capability the next day. They tried it and were hooked. Patent secretaries became remote members of the fellowship of the Unix lab. In due time the law department got its own machine. Less well known is how Unix made it into the head office of AT&T. It seems that the CEO, Charlie Brown, did not like to be seen wearing glasses when he read speeches. Somehow his PR assistant learned of the CAT phototypesetter in the Unix lab and asked whether it might be possible to use it to produce scripts in large type. Of course it was. As connections to the top never hurt, the CEO's office was welcomed as another outside user. The cost--occasionally having to develop film for the final copy of a speech--was not onerous. Having teethed on speeches, the head office realized that Unix could also be useful for things that didn't need phototypesetting. Other documents began to accumulate in their directory. By the time we became aware of it, the hoard came to include minutes of AT&T board meetings. 
It didn't seem like a very good idea for us to be keeping records from the inner sanctum of the corporation on a computer where most everybody had super-user privileges. A call to the PR guy convinced him of the wisdom of keeping such things on their own premises. And so the CEO's office bought a Unix system. Just as one hears of cars chosen for their cupholders, so were these users converted to Unix for trivial reasons: line numbers and vanity.” Odd Comments and Strange Doings in Unix (http://orkinos.cmpe.boun.edu.tr/~kosar/odd.html) Everybody loves easter-eggs, and today we have some fun odd ones from throughout UNIX history told by Dennis Ritchie. First up was a fun one where the “mv” command could sometimes print the following “values of b may give rise to dom!” “Like most of the messages recorded in these compilations, this one was produced in some situation that we considered unlikely or as result of abuse; the details don't matter. I'm recording why the phrase was selected. The very first use of Unix in the "real business" of Bell Labs was to type and produce patent applications, and for a while in the early 1970s we had three typists busily typing away in the grotty lab on the sixth floor. One day someone came in and observed on the paper sticking out of one of the Teletypes, displayed in magnificent isolation, this ominous phrase: values of b may give rise to dom! It was of course obvious that the typist had interrupted a printout (generating the "!" from the ed editor) and moved up the paper, and that the context must have been something like "varying values of beta may give rise to domain wall movement" or some other fragment of a physically plausible patent application. But the phrase itself was just so striking! Utterly meaningless, but it looks like what... a warning? What is "dom?" At the same time, we were experimenting with text-to-voice software by Doug McIlroy and others, and of course the phrase was tried out with it. 
For whatever reason, its rendition of "give rise to dom!" accented the last word in a way that emphasized the phonetic similarity between "doom" and the first syllable of "dominance." It pronounced "beta" in the British style, "beeta." The entire occurrence became a small, shared treasure. The phrase had to be recorded somewhere, and it was, in the v6 source. Most likely it was Bob Morris who did the deed, but it could just as easily have been Ken. I hope that your browser reproduces the b as a Greek beta.” Next up is one you might have heard before: /* You are not expected to understand this */ > Every now and then on Usenet or elsewhere I run across a reference to a certain comment in the source code of the Sixth Edition Unix operating system. I've even been given two sweatshirts that quote it. Most probably just heard about it, but those who saw it in the flesh either had Sixth Edition Unix (ca. 1975) or read the annotated version of this system by John Lions (which was republished in 1996: ISBN 1-57298-013-7, Peer-to-Peer Communications). It's often quoted as a slur on the quantity or quality of the comments in the Bell Labs research releases of Unix. Not an unfair observation in general, I fear, but in this case unjustified. So we tried to explain what was going on. "You are not expected to understand this" was intended as a remark in the spirit of "This won't be on the exam," rather than as an impudent challenge. There are a few other interesting stories as well; if the odd/fun side of UNIX history at all interests you, I would recommend checking it out. Beastie Bits With patches in review the #FreeBSD base system builds 100% reproducibly (https://twitter.com/ed_maste/status/811289279611682816) BSDCan 2017 Call for Participation (https://www.freebsdfoundation.org/news-and-events/call-for-papers/bsdcan-2017/) ioCell 2.0 released (https://github.com/bartekrutkowski/iocell/releases) who even calls link_ntoa? 
(http://www.tedunangst.com/flak/post/who-even-calls-link-ntoa) Booting Androidx86 under bhyve (https://twitter.com/pr1ntf/status/809528845673996288) Feedback/Questions Chris - VNET (http://pastebin.com/016BfvU9) Brian - Package Base (http://pastebin.com/8JJeHuRT) Wim - TrueOS Desktop All-n-one (http://pastebin.com/VC0DPQUF) Daniel - Long Boots (http://pastebin.com/q7pFu7pR) Bryan - ZFS / FreeNAS (http://pastebin.com/xgUnbzr7) Bryan - FreeNAS Security (http://pastebin.com/qqCvVTLB) ***

172: A tale of BSD from yore

December 14, 2016 1:30:09 64.91 MB Downloads: 0

This week on BSD Now, we have a very special guest joining us to tell us a tale of the early days in BSD history. That plus some new OpenSSH goodness, shell scripting utilities and much more. Stay tuned for your place to B...SD! This episode was brought to you by Headlines Call For Testing: OpenSSH 7.4 (http://marc.info/?l=openssh-unix-dev&m=148167688911316&w=2) Getting ready to head into the holidays for the end of 2016 means some of us will have spare time on our hands. What a perfect time to get some call for testing work done! Damien Miller has issued a public CFT for the upcoming OpenSSH 7.4 release, which considering how much we all rely on SSH I would expect will get some eager volunteers for testing. What are some of the potential breakers? “* This release removes server support for the SSH v.1 protocol. ssh(1): Remove 3des-cbc from the client's default proposal. 64-bit block ciphers are not safe in 2016 and we don't want to wait until attacks like SWEET32 are extended to SSH. As 3des-cbc was the only mandatory cipher in the SSH RFCs, this may cause problems connecting to older devices using the default configuration, but it's highly likely that such devices already need explicit configuration for key exchange and hostkey algorithms already anyway. sshd(8): Remove support for pre-authentication compression. Doing compression early in the protocol probably seemed reasonable in the 1990s, but today it's clearly a bad idea in terms of both cryptography (cf. multiple compression oracle attacks in TLS) and attack surface. Pre-auth compression support has been disabled by default for >10 years. Support remains in the client. ssh-agent will refuse to load PKCS#11 modules outside a whitelist of trusted paths by default. The path whitelist may be specified at run-time. sshd(8): When a forced-command appears in both a certificate and an authorized keys/principals command= restriction, sshd will now refuse to accept the certificate unless they are identical. 
The previous (documented) behaviour of having the certificate forced-command override the other could be a bit confusing and error-prone. sshd(8): Remove the UseLogin configuration directive and support for having /bin/login manage login sessions.” What about new features? 7.4 has some of those to wake you up also: “* ssh(1): Add a proxy multiplexing mode to ssh(1) inspired by the version in PuTTY by Simon Tatham. This allows a multiplexing client to communicate with the master process using a subset of the SSH packet and channels protocol over a Unix-domain socket, with the main process acting as a proxy that translates channel IDs, etc. This allows multiplexing mode to run on systems that lack file-descriptor passing (used by current multiplexing code) and potentially, in conjunction with Unix-domain socket forwarding, with the client and multiplexing master process on different machines. Multiplexing proxy mode may be invoked using "ssh -O proxy ..." sshd(8): Add a sshd_config DisableForwarding option that disables X11, agent, TCP, tunnel and Unix domain socket forwarding, as well as anything else we might implement in the future. Like the 'restrict' authorized_keys flag, this is intended to be a simple and future-proof way of restricting an account. sshd(8), ssh(1): Support the "curve25519-sha256" key exchange method. This is identical to the currently-supported method named "curve25519-sha256@libssh.org". sshd(8): Improve handling of SIGHUP by checking to see if sshd is already daemonised at startup and skipping the call to daemon(3) if it is. This ensures that a SIGHUP restart of sshd(8) will retain the same process-ID as the initial execution. sshd(8) will also now unlink the PidFile prior to SIGHUP restart and re-create it after a successful restart, rather than leaving a stale file in the case of a configuration error. bz#2641 sshd(8): Allow ClientAliveInterval and ClientAliveCountMax directives to appear in sshd_config Match blocks. 
sshd(8): Add %-escapes to AuthorizedPrincipalsCommand to match those supported by AuthorizedKeysCommand (key, key type, fingerprint, etc.) and a few more to provide access to the contents of the certificate being offered. Added regression tests for string matching, address matching and string sanitisation functions. Improved the key exchange fuzzer harness.” Get those tests done and be sure to send feedback, both positive and negative. *** How My Printer Caused Excessive Syscalls & UDP Traffic (https://zinascii.com/2014/how-my-printer-caused-excessive-syscalls.html) “3,000 syscalls a second, on an idle machine? That doesn’t seem right. I just booted this machine. The only processes running are those required to boot the SmartOS Global Zone, which is minimal.” This is a story from 2014, about debugging a machine that was being slowed down by excessive syscalls and UDP traffic. It is also an excellent walkthrough of the basics of DTrace “Well, at least I have DTrace. I can use this one-liner to figure out what syscalls are being made across the entire system.” dtrace -n 'syscall:::entry { @[probefunc,probename] = count(); }' “Wow! That is a lot of lwp_sigmask calls. Now that I know what is being called, it’s time to find out who is doing the calling. I’ll use another one-liner to show me the most common user stacks invoking lwp_sigmask.” dtrace -n 'syscall::lwp_sigmask:entry { @[ustack()] = count(); }' “Okay, so this mdnsd code is causing all the trouble. What is the distribution of syscalls for the mdnsd program?” dtrace -n 'syscall:::entry /execname == "mdnsd"/ { @[probefunc] = count(); } tick-1s { exit(0); }' “Lots of signal masking and polling. What the hell! Why is it doing this? What is mdnsd anyways? Is there a man page? Googling for mdns reveals that it is used for resolving host names in small networks, like my home network. It uses UDP, and requires zero configuration. Nothing obvious to explain why it’s flipping out. I feel helpless. 
I turn to the only thing I can trust, the code.” “Woah boy, this is some messy looking code. This would not pass illumos cstyle checks. Turns out this is code from Darwin—the kernel of OSX.” “Hmmm…an idea pops into my computer animal brain. I wonder…I wonder if my MacBook is also experiencing abnormal syscall rates? Nooo, that can’t be it. Why would both my SmartOS server and MacBook both have the same problem? There is no good technical reason to link these two. But, then again, I’m dealing with computers here, and I’ve seen a lot of strange things over the years—I switch to my laptop.” sudo dtrace -n 'syscall::: { @[execname] = count(); } tick-1s { exit(0); }' Same thing, except mdns is called discoveryd on OS X “I ask my friend Steve Vinoski to run the same DTrace one-liner on his OSX machines. He has both Yosemite and the older Mountain Lion. But, to my dismay, neither of his machines are exhibiting high syscall rates. My search continues.” “Not sure what to do next, I open the OSX Activity Monitor. In desperation I click on the Network tab.” “HOLE—E—SHIT! Two-Hundred-and-Seventy Million packets received by discoveryd. Obviously, I need to stop looking at code and start looking at my network. I hop back onto my SmartOS machine and check network interface statistics.” “Whatever is causing all this, it is sending about 200 packets a second. At this point, the only thing left to do is actually inspect some of these incoming packets. I run snoop(1M) to collect events on the e1000g0 interface, stopping at about 600 events. Then I view the first 15.” “A constant stream of mDNS packets arriving from IP 10.0.1.8. I know that this IP is not any of my computers. The only devices left are my iPhone, AppleTV, and Canon printer. Wait a minute! The printer! Two days earlier I heard some beeping noises…” “I own a Canon PIXMA MG6120 printer. It has a touch interface with a small LCD at the top, used to set various options. 
Since it sits next to my desk I sometimes lay things on top of it like a book or maybe a plate after I’m done eating. If I lay things in the wrong place it will activate the touch interface and cause repeated pressing. Each press makes a beeping noise. If the object lays there long enough the printer locks up and I have to reboot it. Just such events occurred two days earlier.” “I fire up dladm again to monitor incoming packets in realtime. Then I turn to the printer. I move all the crap off of it: two books, an empty plate, and the title for my Suzuki SV650 that I’ve been meaning to sell for the last year. I try to use the touch screen on top of the printer. It’s locked up, as expected. I cut power to the printer and whip my head back to my terminal.” No more packet storm “Giddy, I run DTrace again to count syscalls.” “I’m not sure whether to laugh or cry. I laugh, because, LOL computers. There’s some new dumb shit you deal with everyday, better to roll with the punches and laugh. You live longer that way. At least I got to flex my DTrace muscles a bit. In fact, I felt a bit like Brendan Gregg when he was debugging why OSX was dropping keystrokes.” “I didn’t bother to root cause why my printer turned into a UDP machine gun. I don’t intend to either. I have better things to do, and if rebooting solves the problem then I’m happy. Besides, I had to get back to what I was trying to do six hours before I started debugging this damn thing.” There you go. The Internet of Terror has already been on your LAN for years. Making Getaddrinfo Concurrent in Python on Mac OS and BSD (https://emptysqua.re/blog/getaddrinfo-cpython-mac-and-bsd/) We have a very fun blog post today to pass along, originally authored by A. Jesse Jiryu Davis. Specifically, the tale of one man’s quest to make getaddrinfo concurrent in Python on Mac OS and BSD. 
To give you a small taste of this tale, let us pass along just the introduction “Tell us about the time you made DNS resolution concurrent in Python on Mac and BSD. No, no, you do not want to hear that story, my friends. It is nothing but old lore and #ifdefs. But you made Python more scalable. The saga of Steve Jobs was sung to you by a mysterious wizard with a fanciful nickname! Tell us! Gather round, then. I will tell you how I unearthed a lost secret, unbound Python from old shackles, and banished an ancient and horrible Mutex Troll. Let us begin at the beginning.” Is your interest piqued? It should be. I’m not sure we could do this blog post justice trying to read it aloud here, but I definitely recommend it if you want to see how he managed to get this bit of code working cross platform. (And it’s highly entertaining as well) “A long time ago, in the 1980s, a coven of Berkeley sorcerers crafted an operating system. They named it after themselves: the Berkeley Software Distribution, or BSD. For generations they nurtured it, growing it and adding features. One night, they conjured a powerful function that could resolve hostnames to IPv4 or IPv6 addresses. It was called getaddrinfo. The function was mighty, but in years to come it would grow dangerous, for the sorcerers had not made getaddrinfo thread-safe.” “As ages passed, BSD spawned many offspring. There were FreeBSD, OpenBSD, NetBSD, and in time, Mac OS X. Each made its copy of getaddrinfo thread safe, at different times and different ways. Some operating systems retained scribes who recorded these events in the annals. Some did not.” The story continues as our hero battles the Mutex Troll and quests for ancient knowledge “Apple engineers are not like you and me — they are a shy and secretive folk. They publish only what code they must from Darwin. Their comings and goings are recorded in no bug tracker, their works in no changelog. 
To learn their secrets, one must delve deep.” “There is a tiny coven of NYC BSD users who meet at the tavern called Stone Creek, near my dwelling. They are aged and fierce, but I made the Sign of the Trident and supplicated them humbly for advice, and they were kindly to me.” Spoiler: “Without a word, the mercenary troll shouldered its axe and trudged off in search of other patrons on other platforms. Never again would it hold hostage the worthy smiths forging Python code on BSD.” *** Using release(7) to create FreeBSD images for OpenStack (https://diegocasati.com/2016/12/13/using-release7-to-create-freebsd-images-for-openstack-yes-you-can-do-it/) Following a recent episode where we covered a walkthrough on how to create FreeBSD guest OpenStack images, we wondered if it would be possible to integrate this process into the FreeBSD release(7) process, so the images could be generated consistently and automatically Being the awesome audience that you are, one of you responded by doing exactly that “During a recent BSDNow podcast, Allan and Kris mentioned that it would be nice to have a tutorial on how to create a FreeBSD image for OpenStack using the official release(7) tools. With that, it came to me that: #1 I do have access to an OpenStack environment and #2 I am interested in having FreeBSD as a guest image in my environment. Looks like I was up for the challenge.” “Previously, I’ve had success running FreeBSD 11.0-RELEASE on OpenStack but more could/should be done. For instance, as suggested by Allan, wouldn’t it be nice to deploy the latest code from FreeBSD? Running -STABLE or even -CURRENT? Yes, it would. Also, wouldn’t it be nice to customize these images for a specific need? I’d say ‘Yes’ for that as well.” “After some research I found that the current openstack.conf file, located at /usr/src/release/tools/, could use some extra tweaks to get where I wanted. I’ve created and attached that to a bugzilla on the same topic. 
You can read about that here (https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213396)."

Steps:
Fetch the FreeBSD source code and extract it under /usr/src
Once the code is in place, follow the regular process of build(7) and perform a make buildworld buildkernel
Change into the release directory (/usr/src/release) and perform a make cloudware-release WITH_CLOUDWARE=yes CLOUDWARE=OPENSTACK VMIMAGE=2G

"That's it! This will generate a qcow2 image 1.4G in size and a raw image of 2G. The entire process uses the release(7) toolchain to generate the image and should work with newer versions of FreeBSD."

+ The patch has already been committed to FreeBSD (https://svnweb.freebsd.org/base?view=revision&revision=310047)

Interview - Rod Grimes - rgrimes@freebsd.org (mailto:rgrimes@freebsd.org) Want to help fund the development of GPU Passthru? Visit bhyve.org (http://bhyve.org/)

***

News Roundup

Configuring the FreeBSD automounter (http://blog.khubla.com/freebsd/configuring-the-freebsd-automounter)

Ever had to configure the FreeBSD auto-mounting daemon? Today we have a blog post that walks us through a few of the configuration knobs you have at your disposal. First up, Tom shows us his /etc/fstab file, and the various UFS partitions he has set up with the 'noauto' flag so they are not mounted at system boot. His amd.conf file is pretty basic, with just options enabled to restart mounts and unmount on exit. Where most users will want to pay attention is in the crafting of an amd.map file. Within this file we have the various commands that perform mounts and unmounts of the targeted disks / file-systems on demand. Pay special attention to all the special characters, since they all matter, and a stray or missing ';' could be a source of failure. Lastly, a few knobs in rc.conf will enable the various services, and a reboot should confirm the functionality.
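For a sense of the format, an amd.map entry looks something like this (a hypothetical sketch using amd(8) map syntax; the device name and mount options are invented, not taken from Tom's post):

```
# amd.map -- illustrative only; device and options are made up
/defaults   type:=program;fs:=${autodir}/${key}
backup      mount:="/sbin/mount mount -o noatime /dev/ada1p1 ${fs}";unmount:="/sbin/umount umount ${fs}"
```

Note how every option pair is chained with ';' on a single line -- exactly the kind of syntax where one stray character silently breaks the map.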
*** l2k16 hackathon report: LibreSSL manuals now in mdoc(7) (http://undeadly.org/cgi?action=article&sid=20161114174451) Hackathon report by Ingo Schwarze “Back in the spring two years ago, Kristaps Dzonsons started the pod2mdoc(1) conversion utility, and less than a month later, the LibreSSL project began. During the general summer hackathon in the same year, g2k14, Anthony Bentley started using pod2mdoc(1) for converting LibreSSL manuals to mdoc(7).” “Back then, doing so still was a pain, because pod2mdoc(1) was still full of bugs and had gaping holes in functionality. For example, Anthony was forced to basically translate the SYNOPSIS sections by hand, and to fix up .Fn and .Xr in the body by hand as well. All the same, he speedily finished all of libssl, and in the autumn of the same year, he mustered the courage to commit his work.” “Near the end of the following winter, i improved the pod2mdoc(1) tool to actually become convenient in practice and started work on libcrypto, converting about 50 out of the about 190 manuals. Max Fillinger also helped a bit, converting a handful of pages, but i fear i tarried too much checking and committing his work, so he quickly gave up on the task. After that, almost nothing happened for a full year.” “Now i was finally fed up with the messy situation and decided to put an end to it. So i went to Toulouse and finished the conversion of the remaining 130 manual pages in libcrypto, such that you can now view the documentation of all functions” Interactive Terminal Utility: smenu (https://github.com/p-gen/smenu) Ok, I’ve made no secret of my love for shell scripting. Well today we have a new (somewhat new to us) tool to bring your way. Have you ever needed to deal with large lists of data, perhaps as the result of a long specially crafted pipe? What if you need to select a specific value from a range and then continue processing? Enter ‘smenu’ which can help make your scripting life easier. 
"smenu is a selection filter just like sed is an editing filter. This simple tool reads words from the standard input, presents them in a cool interactive window after the current line on the terminal and writes the selected word, if any, on the standard output. After having unsuccessfully searched the NET for what I wanted, I decided to try to write my own. I have tried hard to make its usage as simple as possible. It should work, even when using an old vt100 terminal, and is UTF-8 aware."

What this means is that in your interactive scripts you can much more easily present the user with a cursor-driven menu to select from a range of possible choices (without needing to craft a bunch of dialog flags). Take a look, and hopefully you'll find creative uses for it in your shell scripts in the future.

***

Ubuntu still isn't free software (http://mjg59.dreamwidth.org/45939.html)

"Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries. This does not affect your rights under any open source licence applicable to any of the components of Ubuntu. If you need us to approve, certify or provide modified versions for redistribution you will require a licence agreement from Canonical, for which you may be required to pay. For further information, please contact us"

"Mark Shuttleworth just blogged (http://insights.ubuntu.com/2016/12/01/taking-a-stand-against-unstable-risky-unofficial-ubuntu-images/) about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security.
Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.” “The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu. [1]: And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise” “If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software. 
Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software."

"Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software."

Beastie Bits

OPNsense 16.7.10 released (https://opnsense.org/opnsense-16-7-10-released/)
OpenBSD Foundation Welcomes First Iridium Donor: Smartisan (http://undeadly.org/cgi?action=article&sid=20161123193708&mode=expanded&count=8)
Jan Koum donates $500,000 to FreeBSD (https://www.freebsdfoundation.org/blog/foundation-announces-new-uranium-donor/)
In Soviet Russia, BSD makes you (https://en.wikipedia.org/wiki/DEMOS)

Feedback/Questions

Jason - Value (http://pastebin.com/gRN4Lzy8)
Hamza - Shell Scripting (http://pastebin.com/GZYjRmSR) Blog link (http://aikchar.me/blog/unix-shell-programming-lessons-learned.html)
Dave - Migrating to FreeBSD (http://pastebin.com/hEBu3Drp)
Dan - Which BSD? (http://pastebin.com/1HpKqCSt)
Zach - AMD Video (http://pastebin.com/4Aj5ebns)

***

171: The APU - BSD Style!

December 07, 2016 1:27:13 62.8 MB Downloads: 0

Today on the show, we've got a look at running OpenBSD on an APU, some BSD in your Android, managing your own FreeBSD cloud service with ansible and much more. Keep it tuned to your place to B...SD! This episode was brought to you by Headlines

OpenBSD on PC Engines APU2 (https://github.com/elad/openbsd-apu2)

A detailed walkthrough of building an OpenBSD firewall on a PC Engines APU2. It starts with a breakdown of the parts that were purchased, totaling around $200. Then the reader is walked through configuring the serial console, flashing the ROM, and updating the BIOS. The next step is actually creating a custom OpenBSD install image and pre-configuring its serial console. Starting with OpenBSD 6.0, this step is done automatically by the installer.

Installation:
Power off the APU2
Insert the bootable OpenBSD installer USB flash drive into one of the USB slots on the APU2
Power on the APU2, press F10 to get to the boot menu, and choose to boot from USB (usually option number 1)
At the boot> prompt, remember the serial console settings (see above)
Also at the boot> prompt, press Enter to start the installer
Follow the installation instructions

The driver used for wireless networking is athn(4). It might not work properly out of the box. Once OpenBSD is installed, run fw_update with no arguments. It will figure out which firmware updates are required and will download and install them. When it finishes, reboot.

Where the rubber meets the road… (part one) (https://functionallyparanoid.com/2016/11/29/where-the-rubber-meets-the-road-part-one/)

A user describes their adventures installing OpenBSD and Arch Linux on a new Lenovo X1 Carbon (4th gen, Skylake). They also detail why they moved away from their beloved MacBook, which, while long, does describe a journey away from Apple that we've heard elsewhere.
The journey begins with getting a new Windows laptop, shrinking the partition and creating space for a triple-boot install of Windows / Arch / OpenBSD. Brian then details how he set up the partitioning and performed the initial Arch installation, getting it tuned to his specifications. Next up was OpenBSD, though that went sideways initially due to a new NVMe drive that wasn't fully supported (yet). The article is split into two parts (we will bring you the next installment at a future date), but he leaves us with the plan of attack to build a custom OpenBSD kernel with corrected PCI device identifiers. We wish Brian luck, and look forward to the "rest of the story" soon.

***

Howto setup a FreeBSD jail server using iocage and ansible (https://github.com/JoergFiedler/freebsd-ansible-demo)

Setting up a FreeBSD jail server can be a daunting task. However, when a guide comes along which shows you how to do that, including not exposing a single (non-jailed) port to the outside world, you know we had to take a closer look. This guide comes to us from GitHub, courtesy of Joerg Fiedler. The project goals seem notable:

Ansible playbook that creates a FreeBSD server which hosts multiple jails.
Travis is used to run/test the playbook.
No service on the host is exposed externally. All external connections terminate within a jail.
Roles can be reused using Ansible Galaxy. Combine any of those roles to create a FreeBSD server which perfectly suits you.

To get started, you'll need a machine with Ansible, Vagrant and VirtualBox, and your AWS credentials if you want it to automatically create / destroy EC2 instances.
There's already an impressive list of Ansible roles created for you to start with:

freebsd-build-server - Creates a FreeBSD poudriere build server
freebsd-jail-host - FreeBSD jail host
freebsd-jailed - Provides a jail
freebsd-jailed-nginx - Provides a jailed nginx server
freebsd-jailed-php-fpm - Creates a php-fpm pool and a ZFS dataset which is used as web root by php-fpm
freebsd-jailed-sftp - Installs an SFTP server
freebsd-jailed-sshd - Provides a jailed sshd server
freebsd-jailed-syslogd - Provides a jailed syslogd
freebsd-jailed-btsync - Provides a jailed btsync instance server
freebsd-jailed-joomla - Installs Joomla
freebsd-jailed-mariadb - Provides a jailed MariaDB server
freebsd-jailed-wordpress - Provides a jailed WordPress server

Since the machines have to be customized before starting, he mentions that cloud-init is used to do the following:

activate the pf firewall
add a "pass all keep state" rule to pf to keep track of connection states, which in turn allows you to reload the pf service without losing connections
install the following packages: sudo, bash, python27
allow passwordless sudo for user ec2-user

From there it is pretty straightforward, just a couple of commands to spin up the VMs either locally on your VirtualBox host, or in the cloud with AWS. Internally the VMs are auto-configured with iocage to create jails, where all your actual services run. A neat project, check it out today if you want a shake-n-bake type cloud + jail solution.

Colin Percival's bsdiff helps reduce Android apk bandwidth usage by 6 petabytes per day (http://android-developers.blogspot.ca/2016/12/saving-data-reducing-the-size-of-app-updates-by-65-percent.html)

A post on the official Android-Developers blog talks about how they used bsdiff (and bspatch) to reduce the size of Android application updates by 65%. bsdiff was developed by FreeBSD's Colin Percival. Earlier this year, we announced that we started using the bsdiff algorithm (by Colin Percival).
Using bsdiff, we were able to reduce the size of app updates on average by 47% compared to the full APK size. This post is actually about the second generation of the code. Today, we're excited to share a new approach that goes further — File-by-File patching. App Updates using File-by-File patching are, on average, 65% smaller than the full app, and in some cases more than 90% smaller. Android apps are packaged as APKs, which are ZIP files with special conventions. Most of the content within the ZIP files (and APKs) is compressed using a technology called Deflate. Deflate is really good at compressing data but it has a drawback: it makes identifying changes in the original (uncompressed) content really hard. Even a tiny change to the original content (like changing one word in a book) can make the compressed output of deflate look completely different. Describing the differences between the original content is easy, but describing the differences between the compressed content is so hard that it leads to inefficient patches. So in the second generation of the code, they use bsdiff on each individual file, then package that, rather than diffing the original and new archives bsdiff is used in a great many other places, including shrinking the updates for the Firefox and Chrome browsers You can find out more about bsdiff here: http://www.daemonology.net/bsdiff/ A far more sophisticated algorithm, which typically provides roughly 20% smaller patches, is described in my doctoral thesis (http://www.daemonology.net/papers/thesis.pdf). 
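The Deflate drawback described above is easy to see for yourself. This short sketch (ours, not from the Android post) compresses two inputs that differ by a single word and checks how quickly the compressed streams diverge:

```python
import zlib

# Two large, nearly identical inputs: one word changed near the start.
original = b"the quick brown fox jumps over the lazy dog. " * 200
modified = original.replace(b"lazy", b"calm", 1)

# The raw inputs differ in only a few bytes...
raw_diff = sum(x != y for x, y in zip(original, modified))

a = zlib.compress(original)
b = zlib.compress(modified)

# ...but the Deflate outputs typically diverge early, so a naive
# binary diff of the two compressed files would be large.
common = 0
for x, y in zip(a, b):
    if x != y:
        break
    common += 1

print("raw bytes changed:", raw_diff)
print("compressed sizes:", len(a), len(b))
print("identical compressed prefix:", common, "of", min(len(a), len(b)))
```

A byte-level diff of the two compressed files is therefore far larger than the tiny change warrants, which is exactly why File-by-File patching diffs the uncompressed contents instead.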
Considering the gains, it is interesting that no one has implemented Colin's more sophisticated algorithm. Colin had an interesting observation (https://twitter.com/cperciva/status/806426180379230208) last night: "I just realized that bandwidth savings due to bsdiff are now roughly equal to what the total internet traffic was when I wrote it in 2003."

***

News Roundup

Distrowatch does an in-depth review of NAS4Free (https://distrowatch.com/weekly.php?issue=20161114#nas4free)

Jesse Smith over at DistroWatch has done a pretty in-depth review of NAS4Free. The review starts by mentioning that NAS4Free works on three platforms, ARM/i386/AMD64, and that for the purposes of this review he would be using AMD64 builds. After going through the initial install (doing typical disk management operations, such as GPT/MBR, etc.) he was ready to begin using the product. One concern originally observed was that the initial boot seemed rather slow. Investigation revealed this was due to it loading the entire OS image into memory, and the first (long) disk read did take some time, but once loaded it was super responsive. The next steps involved doing the initial configuration, which meant creating a new ZFS storage pool. After this process was done, he did find one puzzling UI option called "VM", which indicated it can be linked to VirtualBox in some way, but the docs didn't reveal how to use it. Additionally covered were some of the various "Access" methods, including traditional UNIX permissions, AD and LDAP, and then the various sharing services typical to a NAS, such as NFS / Samba and others. One neat feature was the built-in file browser in the web interface, which gives you another way of getting at your data when NFS / Samba or WebDAV aren't enough. Jesse gives us a nice round-up conclusion as well: Most of the NAS operating systems I have used in the past were built around useful features.
Some focused on making storage easy to set up and manage, others focused on services, such as making files available over multiple protocols or managing torrents. Some strive to be very easy to set up. NAS4Free does pretty well in each of the above categories. It may not be the easiest platform to set up, but it's probably a close second. It may not have the prettiest interface for managing settings, but it is quite easy to navigate. NAS4Free may not have the most add-on services and access protocols, but I suspect there are more than enough of both for most people. Where NAS4Free does better than most other solutions I have looked at is security. I don't think the project's website or documentation particularly focuses on security as a feature, but there are plenty of little security features that I liked. NAS4Free makes it very easy to lock the text console, which is good because we do not all keep our NAS boxes behind locked doors. The system is fairly easy to upgrade and appears to publish regular security updates in the form of new firmware. NAS4Free makes it fairly easy to set up user accounts, handle permissions and manage home directories. It's also pretty straight forward to switch from HTTP to HTTPS and to block people not on the local network from accessing the NAS's web interface. All in all, I like NAS4Free. It's a good, general purpose NAS operating system. While I did not feel the project did anything really amazing in any one category, nor did I run into any serious issues. The NAS ran as expected, was fairly straight forward to set up and easy to manage. This strikes me as an especially good platform for home or small business users who want an easy set up, some basic security and a solid collection of features. Browsix: Unix in the browser tab (https://browsix.org/) Browsix is a research project from the PLASMA lab at the University of Massachusetts, Amherst. 
The goal: Run C, C++, Go and Node.js programs as processes in browsers, including LaTeX, GNU Make, Go HTTP servers, and POSIX shell scripts. "Processes are built on top of Web Workers, letting applications run in parallel and spawn subprocesses. System calls include fork, spawn, exec, and wait." Pipes are supported with pipe(2), enabling developers to compose processes into pipelines. Sockets include support for TCP socket servers and clients, making it possible to run applications like databases and HTTP servers together with their clients in the browser.

Browsix comprises two core parts:
A kernel written in TypeScript that makes core Unix features (including pipes, concurrent processes, signals, sockets, and a shared file system) available to web applications.
Extended JavaScript runtimes for C, C++, Go, and Node.js that support running programs written in these languages as processes in the browser.

This seems like an interesting project, although I am not sure how it would be used as more than a toy.

***

Book Review: PAM Mastery (https://www.cyberciti.biz/reviews/book-review-pam-mastery/)

nixCraft does a book review of Michael W. Lucas' "PAM Mastery". Linux, FreeBSD, and Unix-like systems are multi-user and need some way of authenticating individual users. Back in the old days, this was done in different ways: you needed to change each Unix application to use a different authentication scheme. Before PAM, if you wanted to use an SQL database to authenticate users, you had to write specific support for that into each of your applications. Same for LDAP, etc. So the Open Group led the development of PAM for Unix-like systems. Today Linux, FreeBSD, Mac OS X and many other Unix-like systems are configured to use a centralized authentication mechanism called Pluggable Authentication Modules (PAM). The book "PAM Mastery" deals with the black magic of PAM.
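For readers who have never opened one, a PAM policy is just a stack of modules with control flags. A generic sketch (our illustration, not taken from the book; module names and paths vary between Linux-PAM and OpenPAM):

```
# /etc/pam.d/sshd -- illustrative stack only
auth      required   pam_unix.so                   # check the Unix password
auth      required   pam_google_authenticator.so   # then a one-time password
account   required   pam_unix.so
session   required   pam_unix.so
```

The control flag in the second column ("required" here, but also "sufficient", "optional", and friends) decides how each module's success or failure affects the stack as a whole.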
Of course, each OS chose to implement PAM a little bit differently. The book starts with the basic concepts of PAM and authentication. You learn about Multi-Factor Authentication and why to use PAM instead of changing each program to authenticate the user. The author goes into great detail about why PAM is useful for developers and sysadmins. The examples cover CentOS Linux (RHEL and clones), Debian Linux, and FreeBSD. I like the way the author describes the PAM configuration files and common modules, covering everyday scenarios for the sysadmin. The PAM configuration file format and PAM module interfaces are discussed in easy-to-understand language. Control flags in PAM can be very confusing for new sysadmins. Modules can be stacked in a particular order, and the control flags determine how important the success or failure of a particular module is. There is also a chapter about using one-time passwords (Google Authenticator) for your applications. The final chapter is all about enforcing good password policies for users and apps using PAM. The sysadmin will find this book useful, as it covers a common authentication scheme that can be used with a wide variety of applications on Unix. You will master PAM topics and take control over authentication for your organization's IT infrastructure. If you are a Linux or Unix sysadmin, I would highly recommend this book. Once again Michael W. Lucas nailed it. The only book you may need for PAM deployment. Get "PAM Mastery" (https://www.michaelwlucas.com/tools/pam)

***

Reflections on Trusting Trust - Ken Thompson, co-author of UNIX (http://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html)

Ken Thompson's "cc hack" - Presented in the journal Communications of the ACM, Vol. 27, No.
8, August 1984, in a paper entitled "Reflections on Trusting Trust", Ken Thompson, co-author of UNIX, recounted a story of how he created a version of the C compiler that, when presented with the source code for the "login" program, would automatically compile in a backdoor to allow him entry to the system. This is only half the story, though. In order to hide this trojan horse, Ken also added to this version of "cc" the ability to recognize when it was recompiling itself, to make sure that the newly compiled C compiler contained both the "login" backdoor and the code to insert both trojans into a newly compiled C compiler. In this way, the source code for the C compiler would never show that these trojans existed.

The article starts off by talking about a contest to write a program that produces its own source code as output. Or rather, a C program that writes a C program that produces its own source code as output. The C compiler is written in C. What I am about to describe is one of many "chicken and egg" problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler. Suppose we wish to alter the C compiler to include the sequence "\v" to represent the vertical tab character. The extension to Figure 2 is obvious and is presented in Figure 3. We then recompile the C compiler, but we get a diagnostic. Obviously, since the binary version of the compiler does not know about "\v," the source is not legal C. We must "train" the compiler. After it "knows" what "\v" means, then our new change will become legal C. We look up on an ASCII chart that a vertical tab is decimal 11. We alter our source to look like Figure 4. Now the old compiler accepts the new source. We install the resulting binary as the new official C compiler and now we can write the portable version the way we had it in Figure 3.

The actual bug I planted in the compiler would match code in the UNIX "login" command.
The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user. Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions. Next “simply add a second Trojan horse to the one that already exists. The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere. So now there is a trojan’d version of cc. If you compile a clean version of cc, using the bad cc, you will get a bad cc. If you use the bad cc to compile the login program, it will have a backdoor. The source code for both backdoors no longer exists on the system. You can audit the source code of cc and login all you want, they are trustworthy. The compiler you use to compile your new compiler, is the untrustworthy bit, but you have no way to know it is untrustworthy, and no way to make a new compiler, without using the bad compiler. The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. 
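The self-reproducing ("Stage I") trick the attack builds on is easier to grasp with a concrete quine. A two-line Python version, standing in for the C program described in Thompson's paper:

```python
# A quine: the two lines below print exactly their own source
# (these comments aside). %r re-emits the string's own repr,
# and %% collapses to a literal % when formatted.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Feeding the output back to the interpreter reproduces it forever, which is the property the trojaned compiler exploits to survive recompilation from clean-looking source.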
I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect. Acknowledgment: I first read of the possibility of such a Trojan horse in an Air Force critique of the security of an early implementation of Multics. I cannot find a more specific reference to this document. I would appreciate it if anyone who can supply this reference would let me know.

Beastie Bits

Custom made Beastie Stockings (https://www.etsy.com/listing/496638945/freebsd-beastie-christmas-stocking)
Migrating ZFS from mirrored pool to raidz1 pool (http://ximalas.info/2016/12/06/migrating-zfs-from-mirrored-pool-to-raidz1-pool/)
OpenBSD and you (https://home.nuug.no/~peter/blug2016/)
Watson.org FreeBSD and Linux cross reference (http://fxr.watson.org/)
OpenGrok (http://bxr.su/)
FreeBSD SA-16:37: libc (https://www.freebsd.org/security/advisories/FreeBSD-SA-16:37.libc.asc) -- a 26+ year old bug found in BSD's libc; all BSDs are likely affected. A specially crafted argument can trigger a static buffer overflow in the library, with the possibility of overwriting the following static buffers that belong to other library functions.
HardenedBSD issues correction for libc patch (https://github.com/HardenedBSD/hardenedBSD/commit/fb823297fbced336b6beeeb624e2dc65b67aa0eb) -- the original patch improperly calculates how many bytes remain in the buffer.
From December 27th until the 30th, the 33rd Chaos Communication Congress is going to take place in Hamburg, Germany. Think of it as the yearly gathering of the European hacker scene and their overseas friends. I am one of the persons organizing the "BSD assembly (https://events.ccc.de/congress/2016/wiki/Assembly:BSD)" as a gathering place for BSD enthusiasts, waving the flag amidst all the other projects / communities.
Feedback/Questions Chris - IPFW + Wifi (http://pastebin.com/WRiuW6nn) Jason - bhyve pci (http://pastebin.com/JgerqZZP) Al - pf errors (http://pastebin.com/3XY5MVca) Zach - Xorg settings (http://pastebin.com/Kty0qYXM) Bart - Wireless Support (http://pastebin.com/m3D81GBW) ***

170: Sandboxing Cohabitation

November 30, 2016 1:16:24 55.01 MB Downloads: 0

This week on the show, we've got some new info on the talks from EuroBSDCon, a look at sharing a single ZFS pool between Linux and BSD, sandboxing and much more! Stay tuned for your place to B...SD! This episode was brought to you by Headlines

EuroBSDcon 2016 Presentation Slides (https://2016.eurobsdcon.org/PresentationSlides/)

Due to circumstances beyond the control of the organizers of EuroBSDCon, there were no recordings of the talks given at the event. However, they have collected the slide decks from each of the speakers and assembled them on this page for you. Also, we have some stuff from MeetBSD already: Youtube Playlist (https://www.youtube.com/playlist?list=PLb87fdKUIo8TAMC2HJLZ7H54edD2BeGWv). Not all of the sessions are posted yet, but the rest should appear shortly. MeetBSD 2016 Trip Report: Domagoj Stolfa (https://www.freebsdfoundation.org/blog/meetbsd-2016-trip-report-domagoj-stolfa/)

***

Cohabiting FreeBSD and Gentoo Linux on a Common ZFS Volume (https://ericmccorkleblog.wordpress.com/2016/11/15/cohabiting-freebsd-and-gentoo-linux-on-a-common-zfs-volume/)

Eric McCorkle, who has contributed ZFS support to the FreeBSD EFI boot-loader code, has posted an in-depth look at how he set up dual-boot with FreeBSD and Gentoo on the same ZFS volume. He starts by giving us some background on how the layout is done. First up, GRUB is used as the boot-loader, allowing boot of both Linux and BSD. The next non-typical thing was using /etc/fstab to manage mount-points, instead of the typical 'zfs mount' usage (apart from /home datasets):

data/home is mounted to /home, with all of its child datasets using the ZFS mountpoint system
data/freebsd and its child datasets house the FreeBSD system, and all have their mountpoints set to legacy
data/gentoo and its child datasets house the Gentoo system, and have their mountpoints set to legacy as well

So, how did he set this up?
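Before the steps, note what "mountpoints set to legacy" implies in practice: each OS mounts its own datasets through its own /etc/fstab rather than via the ZFS property system. On the FreeBSD side that would look roughly like this (dataset names follow the article's layout; the exact entries and options are our guess):

```
# /etc/fstab, FreeBSD side -- sketch based on the described layout
data/freebsd/usr    /usr    zfs    rw    0    0
data/freebsd/var    /var    zfs    rw    0    0
```

Gentoo's fstab would carry the matching data/gentoo entries, so neither OS tries to mount the other's datasets at boot.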
He helpfully provides an overview of the steps:

Use the FreeBSD installer to create the GPT and ZFS pool
Install and configure FreeBSD, with the native FreeBSD boot loader
Boot into FreeBSD, create the Gentoo Linux datasets, install GRUB
Boot into the Gentoo Linux installer, install Gentoo
Boot into Gentoo, finish any configuration tasks

The rest of the article walks us through the individual commands that make up each of those steps, as well as how to craft a GRUB config file capable of booting both systems. Personally, since we are using EFI, I would have installed rEFInd and chain-loaded each system's EFI boot code from there, allowing the use of the BSD loader, but to each their own!

HardenedBSD introduces SafeStack into base (https://hardenedbsd.org/article/shawn-webb/2016-11-27/introducing-safestack)

HardenedBSD has integrated SafeStack into its base system and ports tree. SafeStack (http://clang.llvm.org/docs/SafeStack.html) is part of the Code Pointer Integrity (CPI) project within clang. "SafeStack is an instrumentation pass that protects programs against attacks based on stack buffer overflows, without introducing any measurable performance overhead. It works by separating the program stack into two distinct regions: the safe stack and the unsafe stack. The safe stack stores return addresses, register spills, and local variables that are always accessed in a safe way, while the unsafe stack stores everything else. This separation ensures that buffer overflows on the unsafe stack cannot be used to overwrite anything on the safe stack." "As of 28 November 2016, with clang 3.9.0, SafeStack only supports being applied to applications and not shared libraries.
Multiple patches have been submitted to clang by third parties to add support for shared libraries.” SafeStack is only enabled on AMD64 *** pledge(2)… or, how I learned to love web application sandboxing (https://learnbchs.org/pledge.html) We’ve talked about OpenBSD’s sandboxing mechanism pledge() in the past, but today we have a great article by Kristaps Dzonsons about how he grew to love it for web sandboxing. First up, he gives us his opening argument, which should make most of you sit up and listen: “I use application-level sandboxing a lot because I make mistakes a lot; and when writing web applications, the price of making mistakes is very dear. In the early 2000s, that meant using systrace(4) on OpenBSD and NetBSD. Then it was seccomp(2) (followed by libseccomp(3)) on Linux. Then there was capsicum(4) on FreeBSD and sandbox_init(3) on Mac OS X. All of these systems are invoked differently; and for the most part, whenever it came time to interface with one of them, I longed for sweet release from the nightmare. Please, try reading seccomp(2). To the end. Aligning web application logic and security policy would require an arduous (and usually trial-and-error or worse, copy-and-paste) process. If there was any process at all — if the burden of writing a policy didn't cause me to abandon sandboxing at the start. And then there was pledge(2). This document is about pledge(2) and why you should use it and love it.” Not convinced yet? Maybe you should take his challenge: “Let's play a drinking game. The challenge is to stay out of the hospital. 1. Navigate to seccomp(2). 2. Read it to the end. 3. Drink every time you don't understand. For capsicum(4), the challenge is no less difficult. To see these in action, navigate no further than OpenSSH, which interfaces with these sandboxes: sandbox-seccomp-filter.c or sandbox-capsicum.c. (For a history lesson, you can even see sandbox-systrace.c.) 
Keep in mind that these do little more than restrict resources to open descriptors and the usual necessities of memory, signals, timing, etc. Keep that in mind and be horrified.” Now Kristaps has his theory on why these are so difficult (NS..), but perhaps there is a better way. He makes the case that pledge() sits right in that sweet spot: powerful enough to be useful, but easy enough to implement that developers might actually use it. All in all, a nice read, check it out! We would love to hear other developer success stories using pledge() as well. *** News Roundup Unix history repository, now on GitHub (http://www.osnews.com/story/29513/Unix_history_repository_now_on_GitHub) OS News has an interesting tidbit on their site today about the entire commit history of Unix now being available online, starting all the way back in 1970 and bringing us forward to today. From the README: The history and evolution of the Unix operating system is made available as a revision management repository, covering the period from its inception in 1970 as a 2.5 thousand line kernel and 26 commands, to 2016 as a widely-used 27 million line system. The 1.1GB repository contains about half a million commits and more than two thousand merges. The repository employs the Git system for its storage and is hosted on GitHub. It has been created by synthesizing with custom software 24 snapshots of systems developed at Bell Labs, the University of California at Berkeley, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, about one thousand individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology. This is a fascinating find, and it will be of special value to students and historians who wish to look back in time to see how UNIX evolved and, in this repo, ultimately turned into modern FreeBSD. 
*** Yandex commits improvements to FreeBSD network stack (https://reviews.freebsd.org/D8526) “Rework ip_tryforward() to use FIB4 KPI.” This commit brings some code from the experimental routing branch into head As you can see from the graphs, it offers some sizable improvements in forwarding and firewalled packets per second commit (https://svnweb.freebsd.org/base?view=revision&revision=309257) *** The brief history of Unix socket multiplexing – select(2) system call (https://idea.popcount.org/2016-11-01-a-brief-history-of-select2/) Ever wondered about the details of socket multiplexing, aka the history of select(2)? Well, today Marek gives us a treat, with a quick look back at the history that made today’s modern multiplexing possible. First, his article starts the way all good ones do, presenting the problem in silent-movie form: “In the mid-1960s time sharing was still a recent invention. Compared to the previous paradigm - batch-processing - time sharing was truly revolutionary. It greatly reduced the time wasted between writing a program and getting its result. Batch-processing meant hours and hours of waiting, often only to see a program error. See this film to better understand the problems of 1960s programmers: "The trials and tribulations of batch processing".” Enter the wild world of the 1970s, and we’ve now reached the birth of UNIX, which tried to solve the batch processing problem with time-sharing. In those days, when a program was executed, it could "stall" (block) on only a couple of things: waiting for CPU, waiting for disk I/O, or waiting for user input (waiting for a shell command) or the console (printing data too fast). Jump forward another dozen years or so, and the world changes yet again: “This all changed in 1983 with the release of 4.2BSD. This revision introduced an early implementation of a TCP/IP stack and most importantly - the BSD Sockets API. Although today we take the BSD sockets API for granted, it wasn't obvious it was the right API. 
STREAMS were a competing API design in System V Release 3. Coming in along with the sockets API was the select(2) call, which our very own Kirk McKusick gives us some background on: “Select was introduced to allow applications to multiplex their I/O. Consider a simple application like a remote login. It has descriptors for reading from and writing to the terminal and a descriptor for the (bidirectional) socket. It needs to read from the terminal keyboard and write those characters to the socket. It also needs to read from the socket and write to the terminal. Reading from a descriptor that has nothing queued causes the application to block until data arrives. The application does not know whether to read from the terminal or the socket and if it guesses wrong will incorrectly block. So select was added to let it find out which descriptor had data ready to read. If neither, select blocks until data arrives on one descriptor and then awakens telling which descriptor has data to read. [...] Non-blocking was added at the same time as select. But using non-blocking when reading descriptors does not work well. Do you go into an infinite loop trying to read each of your input descriptors? If not, do you pause after each pass and if so for how long to remain responsive to input? Select is just far more efficient. Select also lets you create a single inetd daemon rather than having to have a separate daemon for every service.” The article then wraps up with an interesting conclusion (CSP = communicating sequential processes): “In this discussion I was afraid to phrase the core question. Were Unix processes intended to be CSP-style processes? Are file descriptors CSP-derived "channels"? Is select equivalent to the ALT statement? I think: no. Even if there are design similarities, they are accidental. The file-descriptor abstractions were developed well before the original CSP paper. 
It seems that operating-system socket APIs evolved totally disconnected from the userspace CSP-like programming paradigms. It's a pity though. It would be interesting to see an operating system coherent with the programming paradigms of the user land programs.” A long (but good) read, and worth your time if you are interested in the history of how modern multiplexing came to be. *** How to start CLion on FreeBSD? (https://intellij-support.jetbrains.com/hc/en-us/articles/206525024-How-to-start-CLion-on-FreeBSD) CLion (pronounced "sea lion") is a cross-platform C and C++ IDE By default, the Linux version comes bundled with some binaries, which obviously won’t work with the native FreeBSD build Rather than using Linux emulation, you can replace these components with native versions: pkg install openjdk8 cmake gdb Edit clion-2016.3/bin/idea.properties and change run.processes.with.pty=false Start CLion and open Settings | Build, Execution, Deployment | Toolchains Specify the CMake path: /usr/local/bin/cmake and the GDB path: /usr/local/bin/gdb Without a replacement for fsnotifier, you will get a warning that the IDE may be slow to detect changes to files on disk But, someone has already written a version of fsnotifier that works on FreeBSD and OpenBSD fsnotifier for OpenBSD and FreeBSD (https://github.com/idea4bsd/fsnotifier) -- The fsnotifier is used by IntelliJ for detecting file changes. This version supports FreeBSD and OpenBSD via libinotify and is a replacement for the bundled Linux-only version coming with the IntelliJ IDEA Community Edition. *** Beastie Bits TrueOS Pico – FreeBSD ARM/RPi Thin Clients (https://www.trueos.org/trueos-pico/) A Puppet package provider for FreeBSD's PkgNG package manager. 
(https://github.com/xaque208/puppet-pkgng) Notes from November London *BSD meetup (http://mailman.uk.freebsd.org/pipermail/ukfreebsd/2016-November/014059.html) SemiBug meeting on Dec 20th (http://lists.nycbug.org/pipermail/semibug/2016-November/000131.html) Feedback/Questions Erno - SSH without password (http://pastebin.com/SMvxur9v) Jonathan - Magical ZFS (http://pastebin.com/5ETL7nmj) George - TrueOS (http://pastebin.com/tSVvaV9e) Mohammad - Jails IP (http://pastebin.com/T8nUexd1) Gibheer - BEs (http://pastebin.com/YssXXp70) ***

169: Scheduling your NetBSD

November 23, 2016 1:27:37 63.09 MB Downloads: 0

On today’s episode, we are loaded and ready to go. Lots of OpenBSD news, a look at LetsEncrypt usage, the NetBSD scheduler (oh my) and much more. Keep it tuned to your place to B...SD! This episode was brought to you by Headlines Production ready (http://www.tedunangst.com/flak/post/production-ready) Ted Unangst brings us a piece on what it means to be Production Ready He tells the story of a project he worked on that picked a framework that was “production ready” They tested time zones, and it all seemed to work They tested the unicode support in english and various european languages, and it was all good They sent some emails with it, and it just worked The framework said “Production Ready” on the tin, and it passed all the tests. What is the worst that could happen? Now, we built our product on top of this. Some of the bugs were caught internally. Others were discovered by customers, who were of course a little dismayed. Like, how could you possibly ship this? Indeed. We were doing testing, quite a bit really, but when every possible edge case has a bug, it’s hard to find them all. A customer from Arizona, which does not observe Daylight Saving Time, crashed the app Some less common unicode characters caused a buffer overflow The email system did not properly escape a period on its own line, truncating the email “Egregious performance because of a naive N^2 algorithm for growing a buffer.” “Egregious performance on some platforms due to using the wrong threading primitives.” “Bizarre database connection bugs for some queries that I can’t at all explain.” “In short, everything was “works for me” quality. But is that really production quality?” “There are some obvious contenders for the title of today’s most “production ready” software, but it’s a more general phenomenon. 
People who have success don’t know what they don’t know, what they didn’t test, what unused features will crash and burn.” Using Let's Encrypt within FreeBSD.org (https://blog.crashed.org/letsencrypt-in-freebsd-org/) I decided to give Let's Encrypt certificates a shot on my personal web servers earlier this year after a disaster with StartSSL. I'd like to share what I've learned. The biggest gotcha is that people tend to develop bad habits when they only have to deal with certificates once a year or so. The beginning part of the process is manual and the deployment of certificates somehow never quite gets automated, or things get left out. That all changes with Let's Encrypt certificates. Instead of 1-5 year lifetime certificates, Let's Encrypt certificates are only valid for 90 days. Most people will be wanting to renew every 60-80 days. This forces the issue - you really need to automate and make it robust. The Let's Encrypt folks provide tools to do this for you for the common cases. You run it on the actual machine; it manages the certificates and adjusts the server configuration files for you. Their goal is to provide a baseline shake-n-bake solution. I was not willing to give that level of control to a third party tool for my own servers - and it was absolutely out of the question for the FreeBSD.org cluster. I should probably mention that we do things on the FreeBSD.org cluster that many people would find a bit strange. The biggest problem that we have to deal with is that the traditional model of a firewall/bastion between "us" and "them" does not apply. We design for the assumption that hostile users are already on the "inside" of the network. The cluster is spread over 8 distinct sites with naked internet and no vpn between them. There is actually very little trust between the systems in this network - eg: ssh is for people only - no headless users can ssh. There are no passwords. Sudo can't be used. The command and control systems use signing. 
We don't trust anything by IPv4/IPv6 address because we have to assume MITM is a thing. And so on. In general, things are constructed to be trigger / polling / pull based. The downside is that this makes automation and integration of Let's Encrypt clients interesting. If server configuration files can't be modified; and replicated web infrastructure is literally read-only (via jails/nullfs); and DNS zone files are static; and headless users can't ssh and therefore cannot do commits, how do you do the verification tokens in an automated fashion? Interesting, indeed. We wanted to be able to use certificates on things like ldap and smtp servers. You can't do http file verification on those so we had to use dns validation of domains. First, a signing request is generated, and the acme-challenge is returned. Peter’s post then walks through how the script adds the required TXT record to prove control of the domain, regenerates the zone file, DNSSEC signs it, and waits for it to be published, then continues the Let's Encrypt process. Let's Encrypt then issues the actual certificate. We export the fullchain files into a publication location. There is another jail that can read the fullchain certificates via nullfs and they are published with our non-secrets update mechanism. Since we are using DNSSEC, here is a good opportunity to maintain signed TLSA fingerprints. The catch with TLSA record updates is managing the update event horizon. You are supposed to have both fingerprints listed across the update cycle. We use 'TLSA 3 1 1' records to avoid issues with propagation delays for now. TLSA 3 0 1 changes with every renewal, while 3 1 1 only changes when you generate a new private key. The majority of TLS/SSL servers require a full restart to re-load the certificates if the filename is unchanged. I found out the hard way. There is a great deal more detail in the blog post, I recommend you check it out. Learning more about the NetBSD scheduler (... 
than I wanted to know) Part 1 (http://www.feyrer.de/NetBSD/bx/blosxom.cgi/nb_20161105_1754.html) Part 2 (http://www.feyrer.de/NetBSD/bx/blosxom.cgi/nb_20161109_0059.html) Part 3 (http://www.feyrer.de/NetBSD/bx/blosxom.cgi/nb_20161113_0122.html) Today I had a need to do some number crunching using a home-brewn C program. In order to do some manual load balancing, I was firing up some Amazon AWS instances (which is Xen) with NetBSD 7.0. In this case, the system was assigned two CPUs I started two instances of my program, with the intent to have each one use one CPU. Which is not what happened! Here is what I observed, and how I fixed things for now.

load averages: 2.14, 2.08, 1.83; up 0+00:45:56  18:01:32
27 processes: 4 runnable, 21 sleeping, 2 on CPU
CPU0 states: 100% user, 0.0% nice, 0.0% system, 0.0% interrupt, 0.0% idle
CPU1 states: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Memory: 119M Act, 7940K Exec, 101M File, 3546M Free

 PID USERNAME PRI NICE  SIZE  RES STATE  TIME  WCPU   CPU    COMMAND
2791 root      25    0 8816K 964K RUN/0 16:10 54.20% 54.20% myprog
2845 root      26    0 8816K 964K RUN/0 17:10 47.90% 47.90% myprog

I expected something like WCPU and CPU being around 100%, assuming that each process was bound to its own CPU. The values I actually saw (and listed above) suggested that both programs were fighting for the same CPU. Huh?! NetBSD allows you to create "processor sets", assign CPU(s) to them and then assign processes to the processor sets. Let's have a look! 
# psrset -c 1
# psrset -b 0 2791
# psrset -b 1 2845

load averages: 2.02, 2.05, 1.94; up 0+00:59:32  18:15:08
27 processes: 1 runnable, 24 sleeping, 2 on CPU
CPU0 states: 100% user, 0.0% nice, 0.0% system, 0.0% interrupt, 0.0% idle
CPU1 states: 100% user, 0.0% nice, 0.0% system, 0.0% interrupt, 0.0% idle
Memory: 119M Act, 7940K Exec, 101M File, 3546M Free

 PID USERNAME PRI NICE  SIZE  RES STATE  TIME  WCPU CPU  COMMAND
2845 root      25    0 8816K 964K CPU/1 26:14 100% 100% myprog
2791 root      25    0 8816K 964K RUN/0 25:40 100% 100% myprog

Things are as expected now, with each program being bound to its own CPU. Now why this didn't happen by default is left as an exercise to the reader. I had another look at this today, and was able to reproduce the behaviour using VMWare Fusion with two CPU cores on both NetBSD 7.0_STABLE as well as -current The one hint that I got so far was from Michael van Elst that there may be a rounding error in sched_balance(). Looking at the code, there is not much room for a rounding error. But I am not familiar enough (at all) with the code, so I cannot judge if crucial bits are dropped here, or how that function fits in the whole puzzle. Pondering on the "rounding error", I've set up both VMs with 4 CPUs, and the behaviour shown there is that load is distributed to about three and a half CPUs - three CPUs under full load, and one not reaching 100%. There's definitely something fishy in there. With multiple CPUs, each CPU has a queue of processes that are either "on the CPU" (running) or waiting to be serviced (run) on that CPU. Those processes count as "migratable" in runqueue_t. Every now and then, the system checks all its run queues to see if a CPU is idle, and can thus "steal" (migrate) processes from a busy CPU. This is done in sched_balance(). Such "stealing" (migration) has the positive effect that the process doesn't have to wait for getting serviced on the CPU it's currently waiting on. 
On the other side, migrating the process has effects on the CPU's data and instruction caches, so switching CPUs shouldn't be taken too lightly. All in all, I'd say the patch is a good step forward from the current situation, which does not properly distribute pure CPU hogs at all. Building Cost-Effective 100-Gbps Firewalls for HPC with FreeBSD (https://www.nas.nasa.gov/SC16/demos/demo9.html) The continuous growth of the NASA Center for Climate Simulation (NCCS) requires providing high-performance security tools and enhancing the network capacity. In order to support the requirements of emerging services, including the Advanced Data Analytics Platform (ADAPT) private cloud, the NCCS security team has proposed an architecture to provide extremely cost-effective 100-gigabit-per-second (Gbps) firewalls. The aim of this project is to create a commodity-based platform that can process enough packets per second (pps) to sustain a 100-Gbps workload within the NCCS computational environment. The test domain consists of several existing systems within the NCCS, including switches (Dell S4084), routers (Dell R530s), servers (Dell R420s and C6100s), and host card adapters (10-Gbps Mellanox ConnectX2 and Intel 8259x Ethernet cards). Previous NCCS work testing the FreeBSD operating system for high-performance routing reached a maximum of 4 million pps. Building on this work, we are comparing FreeBSD-11.0 and FreeBSD-Current along with implementing the netmap-fwd Application Programming Interface (API) and tuning the 10-gigabit Ethernet cards. We used the tools iperf3, nuttcp, and netperf to monitor the performance of the maximum bandwidth through the cards. Additional testing has involved enabling the Common Address Redundancy Protocol (CARP) to achieve an active/active architecture. 
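For readers who want to try the CARP piece of this at home, an active/active pair on FreeBSD is configured along these lines. This is a hedged sketch with made-up addresses and passwords, not NCCS's configuration:

```
# Hypothetical /etc/rc.conf fragment on the first firewall (requires the
# carp kernel module, e.g. carp_load="YES" in /boot/loader.conf).
# The peer uses the same vhids with the advskew values swapped, so each
# node is master for one of the two virtual IPs -- active/active.
ifconfig_em0="inet 203.0.113.11/24"
ifconfig_em0_alias0="inet vhid 1 advskew 0   pass examplepw 203.0.113.1/32"
ifconfig_em0_alias1="inet vhid 2 advskew 100 pass examplepw 203.0.113.2/32"
```

Traffic is then balanced across the two virtual IPs (by DNS or routing), and if either box dies the survivor takes over both.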
The tests have shown that with an optimally tuned and configured FreeBSD system, it is possible to create a system that can manage the huge amounts of pps needed to create a 100-Gbps firewall with commodity components. Some interesting findings: FreeBSD was able to send more pps as a client than CentOS 6. Netmap-fwd increased the pps rate significantly. The choice of network card can have a significant impact on pps, tuning, and netmap support. Further tests will continue verifying the above results with even more capable systems, such as 40-gigabit and 100-gigabit Ethernet cards, to achieve even higher performance. In addition to hardware improvements, updates to the network capabilities in the FreeBSD-Current version will be closely monitored and applied as appropriate. The final result will be a reference architecture with representative hardware and software that will enable the NCCS to build, deploy, and efficiently maintain extremely cost-effective 100-Gbps firewalls. Netflix has already managed to saturate a 100 Gbps interface using only a single CPU socket (rather than a dual-socket server). Forwarding/routing is a bit different, but it is definitely on track to get there. Using a small number of commodity servers to firewall 100 Gbps of traffic just takes some careful planning and load balancing. Soon it will be possible using a single host. News Roundup iocell - A FreeBSD jail manager. (https://github.com/bartekrutkowski/iocell) Another jail manager has arrived on the scene, iocell, which begins life as a fork of the “classic” iocage. Due to its shared heritage, it offers much of the same functionality and flags as iocage users will be familiar with. 
For those who aren’t up to speed with either product, some of those features include:

Templates, clones, basejails, fully independent jails
Ease of use
Zero configuration files
Rapid thin provisioning within seconds
Automatic package installation
Virtual networking stacks (vnet)
Shared IP based jails (non vnet)
Resource limits (CPU, MEMORY, DISK I/O, etc.)
Filesystem quotas and reservations
Dedicated ZFS datasets inside jails
Transparent ZFS snapshot management
Binary updates
Differential jail packaging
Export and import
And many more!

The program makes extensive use of ZFS for performing jail operations, so a zpool will be required (but it doesn’t have to be your boot pool) It still looks “very” fresh, even using original iocage filenames in the repo, so a safe guess is that you’ll be able to switch between iocage and iocell with relative ease. Fail2ban on OpenBSD 6.0 (http://blog.gordonturner.ca/2016/11/20/fail2ban-on-openbsd-6-0/) We’ve used Fail2Ban in PC-BSD before, due to its ability to detect and block brute force attempts against a variety of services, including SSH, mail, and others. It can even detect jail brute force attempts, blocking IPs in the host's firewall. However, what about OpenBSD users? Well, Gordon Turner comes to the rescue today with a great writeup on deploying Fail2Ban specifically for that platform. Now, Fail2Ban is a Python program, so you’ll need to pkg install Python first; then he provides instructions on how to manually grab the F2B sources and install them on OpenBSD. Helpfully, Gordon gives us some handy links to scripts and modifications to get F2B running via RC as well, which is a bit different since F2B has both a server and client that must run together. With the installation bits out of the way, we next get to hit the “fun” stuff, which comes in the way of SSH brute force detection. 
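As a rough sketch of where that configuration ends up (hypothetical values, assuming the pf ban action that ships with Fail2ban and OpenBSD's authlog location):

```
# /etc/fail2ban/jail.local -- minimal hypothetical example
[sshd]
enabled   = true
filter    = sshd
banaction = pf
logpath   = /var/log/authlog
maxretry  = 5
findtime  = 600
bantime   = 3600
```

With that in place, five failed SSH logins within ten minutes get the offending IP added to a pf table for an hour.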
Naturally we will be configuring F2B to use “pf” to do our actual blocking, but the examples shown give us full control over the knobs used to detect, and then ultimately call ‘pfctl’ to do our heavy lifting. The last bits of the article give us a runthrough on how to “prime” pf with the correct block tables and performing basic administrative tasks to control F2B in production. A great article, and if you run an OpenBSD box exposed to the internet, you may want to bookmark this one. openbsd changes of note (http://www.tedunangst.com/flak/post/openbsd-changes-of-note) Continuing with our OpenBSD news for the week, we have a new blog post by TedU, which gives us a bunch of notes on the things which have changed over there as of late: Some of the notables include: mcl2k2 pools and the em conversion. The details are in the commits, but the short story is that due to hardware limitations, a number of tradeoffs need to be made between performance and memory usage. The em chip can (mostly) only be programmed to write to 2k buffers. However, ethernet payloads are not nicely aligned. They’re two bytes off. Leading to a costly choice. Provide a 2k buffer, and then copy all the data after the fact, which is slow. Or allocate a larger than 2k buffer, and provide em with a pointer that’s 2 bytes offset. Previously, the next size up from 2k was 4k, which is quite wasteful. The new 2k2 buffer size still wastes a bit of memory, but much less. FreeType 2.7 is prettier than ever. vmm for i386. Improve security. vmm is still running with a phenomenal set of privileges, but perhaps some cross-VM attacks may be limited. On the other side of the world, hyperv support is getting better. Remove setlocale. setlocale was sprinkled all throughout the code base many years ago, even though it did nothing, in anticipation of a day when it would do something. We’ve since decided that day will never come, and so many setlocale calls can go. syspatch is coming. Lots of commits actually. 
Despite the name, it’s more like a system update, since it replaces entire binaries. Then again, replacing a few binaries in a system is like patching small parts of the whole. A syspatch update will be smaller than an entire release. There’s a new build system. It kind of works like before, but a lot of the details have changed to support less root. Actually, it’d be accurate to say the whole build privilege system has been flipped. Start as root, which drops down to the build user to do the heavy lifting, instead of starting as a user that can elevate to root at any time. This no longer requires the build user to be pseudo-root; in fact, the goal is that the build user can’t elevate. There’s several other items on this list, take a look for more details, and he also helpfully provides commit links if you want to see more about any of these topics. It came from Bell Labs (http://media.bemyapp.com/came-bell-labs/#) A little late for a halloween episode, we have “It came from Bell Labs”, a fascinating article talking about the successor to UNIX, Plan 9 There was once an operating system that was intended to be the successor to Unix. Plan 9 From Bell Labs was its name, and playing with it for five minutes is like visiting an alternate dimension where computers are done differently. It was so ahead of its time that it would be considered cutting edge, even today. Find out the weird and woolly history of Plan Nine’s inception and eventual consignment as a footnote of operating systems today. So, if you’ve never heard of Plan 9, how did it exactly differ from the UNIX we know and love today? Here’s just a few of the key features under Plan 9’s hood:

9P – The distributed file system protocol. Everything runs through this, there is no escaping it. Since everything runs on top of 9P, that makes everything running on a Plan 9 box distributed as well. This means, for example, you can import /dev/audio from another machine on the network to use its sound card when your own machine doesn’t have one.
ndb – The namespace server. In conjunction with 9P, it bosses all the programs around and forces them to comply with the Plan 9 way.
Instead of Unix sockets, all the networking just runs through 9P. Thus, everything from ethernet packets to network cards is all just one more kind of file.
While Unicode is implemented ad-hoc in other systems, it’s baked into Plan 9 from the first int main(). In fact, even users who don’t like Plan 9 have to admit that the character encoding support, together with the beautiful built-in rio font, makes every other operating system look primitive.
The system’s own internal programs are built to be a rounded set of user tools from the ground up. So, for instance, it comes with its own editor, acme, built to be its own weird morphing thing that plays nice with the 9P protocol.

Sounds neat, but how did it work in the real world? The result was a mixture of both breathtaking efficiency and alienating other-worldliness. Trying out the system is like a visit to an alternate reality where time-traveling gremlins changed how computers are made and used. You can execute any command anywhere just by typing its name and middle-clicking on it, even in the middle of reading a file. You can type out your blog post in the middle of a man page and save it right there. Screenshots are made by pointing /dev/screen to a file. When you execute a program in a terminal, the terminal morphs into the program you launched instead of running in the background. The window manager, rio, can be invoked within rio to create an instance of itself running inside itself. You can just keep going like that, until, like Inception, you get lost in which layer you’re in. Get used to running Plan 9 long enough, and you will find yourself horribly ill-adapted for dealing with the normal world. 
While system administrators can’t stop praising it, the average home user won’t see much benefit unless they happen to run about eight desktop machines scattered all over. But to quote legendary hacker tribal bard Eric S. Raymond: “…Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor.” A fascinating article, worth your time to read through, even though we’ve pulled some of the best bits here. A nice look at the alternative dimension that could have been. Beastie Bits inks -- Basically Reddit or Hacker News, but without the disagreeable trolls and military industrial complex shills downvoting everything to hide the truth (http://www.tedunangst.com/flak/post/inks) “PAM is Un-American” talk now online (https://youtu.be/Mc2p6sx2s7k) Reddit advertising of “PAM Mastery” (http://blather.michaelwlucas.com/archives/2818) MeetBSD 2016 Report by Michael Dexter (https://www.ixsystems.com/blog/meetbsd-2016-report-michael-dexter/) Various CBSD Tutorials (https://www.bsdstore.ru/en/tutorial.html) Feedback/Questions Dylan - Kaltura Alt (http://pastebin.com/6B96pVcm) Scott - ZFS in Low-Mem (http://pastebin.com/Hrp8qwkP) J - Mixing Ports / Pkgs (http://pastebin.com/85q4Q3Xx) Trenton - DTrace & PC-BSD (http://pastebin.com/RFKY0ERs) Ivan - ZFS Backups (http://pastebin.com/31uqW6vW) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)

168: The Post Show Show

November 16, 2016 1:24:11 60.62 MB Downloads: 0

This week on BSDNow, Allan and I are back from MeetBSD! A good time was had by all, lots to discuss, so let’s jump right into it on your place to B...SD! This episode was brought to you by Headlines Build a FreeBSD 11.0-release Openstack Image with bsd-cloudinit (https://raymii.org/s/tutorials/FreeBSD_11.0-release_Openstack_Image.html) We are going to prepare a FreeBSD image for Openstack deployment. We do this by creating a FreeBSD 11.0-RELEASE instance, installing it and converting it using bsd-cloudinit. We'll use the CloudVPS public Openstack cloud for this. Create an account there and install the Openstack command line tools, like nova, cinder and glance. A FreeBSD image with Cloud Init will automatically resize the disk to the size of the flavor and it will add your SSH key right at boot. You can use Cloud Config to execute a script at first boot, for example, to bootstrap your system into Puppet or Ansible. If you use Ansible to manage OpenStack instances you can integrate it without logging in or doing anything manually. Since FreeBSD 10.2-RELEASE there is an rc script which, when the file /firstboot exists, expands the root filesystem to the full disk. While bsd-cloudinit does this as well, if you don't need the whole cloudinit stack (when you use a static ssh key, for example), you can touch that file to make sure the disk is expanded at first boot. A detailed tutorial that shows how to create customized cloud images using the FreeBSD install media There is also the option of using the FreeBSD release tools to build custom cloud images in a more headless fashion Someone should make a tutorial out of that *** iXsystems Announces TrueOS Launch (https://www.ixsystems.com/blog/ixsystems-announces-trueos-launch/) As loyal listeners to this show, you’ve no doubt heard by now that we are in the middle of undergoing a shift in moving PC-BSD -> TrueOS.
Last week during MeetBSD this was made “official” with iX issuing our press release and I was able to give a talk detailing many of the reasons and things going on with this change. The talk should be available online here soon(ish), but for a quick recap: TrueOS is moving to a rolling-release model based on FreeBSD -CURRENT Lumina has become the default desktop for TrueOS LibreSSL is enabled top to bottom We are in the middle of working on conversion to OpenRC for run-control replacement The TrueOS pico was announced, which is our “Thin-Client” solution, right now allowing you to use a TrueOS server paired with an RPi2 device. *** Running FreeBSD 11 on Raspberry Pi (https://vzaigrin.wordpress.com/2016/10/16/running-freebsd-11-on-raspberry-pi/) This article covers some of the changes you will notice if you upgrade your RPi to FreeBSD 11.0 It covers some of the changes to WiFi in 11.0 Pro Tip: you can get a list of WiFi devices by doing: sysctl net.wlan.devices There are official binary packages for ARM with 11.0, so you can just ‘pkg install’ your favourite apps Many of the LEDs are exposed via the /dev/led/ interface, which you can just echo 0 or 1 to, or use morse(6) to send a message gpioctl can be used to control the various GPIO pins The post also covers how to set up the real-time clock on the Raspberry Pi There is also limited support for adjusting the CPU frequency of the Pi There are also tips on configuring a one-wire temperature sensor *** void-zones-tools for FreeBSD (https://github.com/cyclaero/void-zones-tools) Adblock has been in the news a bit recently, with some of the more popular browser plugins now accepting brib^...contributions to permit specific ads through. Well today the ad-blockers strike back. We have a great tutorial up on GitHub which demonstrates one of the useful features of using Unbound in FreeBSD to do your own ad-blocking with void-zones.
Specifically, void-zones are a way to return NXDOMAIN when DNS requests are made to known malicious or spam sites. Using the void-zones-tools software will make managing this easy, by being able to pull in known lists of sites to block from several 3rd party curators. When coupled with our past tutorials on setting up your own FreeBSD router, this may become very useful for a lot of folks who want to do ad-blocking at a lower level, allowing it to filter smart-phones or any other devices on a network. *** News Roundup BSD Socket API Revamp (https://raw.githubusercontent.com/sustrik/dsock/master/rfc/sock-api-revamp-01.txt) Martin Sustrik has started a draft RFC to revamp the BSD Sockets API: The progress in the area of network protocols is distinctively lagging behind. While every hobbyist new to the art of programming writes and publishes their small JavaScript libraries, there's no such thing going on with network protocols. Indeed, it looks like the field of network protocols is dominated by big companies and academia, just like programming as a whole used to be before the advent of personal computers. the API proposed in this document doesn't try to virtualize all possible aspects of all possible protocols and provide a single set of functions to deal with all of them. Instead, it acknowledges how varied the protocol landscape is and how much the requirements for individual protocols differ. Therefore, it lets each protocol define its own API and asks only for bare minimum of standardised behaviour needed to implement protocol composability. As a consequence, the new API is much more lightweight and flexible than BSD socket API and allows to decompose today's monolithic protocol monsters into small single-purpose microprotocols that can be easily combined together to achieve desired functionality.
The idea behind the new design is to allow the software author to define their own protocols via a generic interface, and easily stack them on top of the existing network protocols, be they the basic protocols like TCP/IP, or a layer 7 protocol like HTTP Example of creating a stack of four protocols: ~~ int s1 = tcp_connect("192.168.0.111:5555"); int s2 = foo_start(s1, arg1, arg2, arg3); int s3 = bar_start(s2); int s4 = baz_start(s3, arg4, arg5); ~~ It also allows applying generic transformations to the protocols: ~~ int tcps = tcp_connect("192.168.0.111:80"); /* Websockets is a connected protocol. */ int ws = websock_connect(tcps); uint16_t compression_algorithm; mrecv(ws, &compression_algorithm, 2, -1); /* Compression socket is unconnected. */ int cs = compress_start(ws, compression_algorithm); ~~ *** Updated version of re(4) for DragonflyBSD (http://lists.dragonflybsd.org/pipermail/users/2016-November/313140.html) Sephe over at the Dragonfly project has issued a CFT for a newer version of the “re” driver For those who don’t know, that is for Realtek NICs; specifically, his updates add features: I have made an updated version of re(4), which leverages Realtek driver's chip/PHY reset/initialization code. I hope it can resolve all kinds of weirdness we encountered on this chip so far. Testers, you know what to do! Give this a whirl and let him know if you run into any new issues, or better yet, give feedback if it fixes some long-standing problems you’ve run into in the past.
*** Hackathon reports from OpenBSD’s B2K16 b2k16 hackathon report: Jeremy Evans on ports cleaning, progress on postgres, nginx, ruby and more (http://undeadly.org/cgi?action=article&sid=20161112112023) b2k16 hackathon report: Landry Breuil on various ports progress (http://undeadly.org/cgi?action=article&sid=20161112095902) b2k16 hackathon report: Antoine Jacoutot on GNOME's path forward, various ports progress (http://undeadly.org/cgi?action=article&sid=20161109030623) We have a trio of hackathon reports from OpenBSD’s B2K16 (Recently held in Budapest) First up - Jeremy Evans gives us his rundown which starts with sweeping some of the cruft out of the barn: I started off b2k16 by channeling tedu@, and removing a lot of ports, including lang/ruby/2.0, lang/io, convertors/ruby-json, databases/dbic++, databases/ruby-swift, databases/ruby-jdbc-*, x11/ruby-profiligacy, and mail/ruby-mailfactory. After that, he talks about improvements made to postgres, nginx and ruby ports, fixing things such as pg_upgrade support, breaking nginx down into sub-packages and a major ruby update to about 50% of the packages. Next up - Landry Breuil tells us about his trip, which also started with some major ports pruning, including some stale XFCE bits and drupal6. One of the things he mentions is the Tor browser: Found finally some time again to review properly the pending port for Tor Browser, even if i don't like the way it is developed (600+ patches against upstream firefox-esr !? even if relationship is improving..) nor will endorse its use, i feel that the time that was spent on porting it and updating it and maintaining it shouldn't be lost, and it should get commited - there are only some portswise minor tweaks to fix. Had a bit of discussions about that with other porters... Lastly, Antoine Jacoutot gives us a smaller update on his work: First task of this hackathon was for Jasper and I to upgrade to GNOME 3.22.1 (version 3.22.2 hit the ports tree since).
As usual I already updated the core libraries a few days before so that we could start with a nice set of fully updated packages. It ended up being the fastest GNOME update ever, it all went very smoothly. We're still debating the future of GNOME on OpenBSD though. More and more features require systemd interfaces and without a replacement it may not make sense to keep it around. Implementing these interfaces requires time which Jasper and I don't really have these days... Anyway, we'll see. All in all, it sounds like a good trip, with some much-needed hacking taking place. Good to see the cruft getting cleaned up, along with some new exciting ports landing. *** July to September 2016 Status Report (https://www.freebsd.org/news/status/report-2016-07-2016-09.html) The latest FreeBSD quarterly status report is out It includes the induction of the new Core team, and reports from all of the other teams, including Release Engineering, Port Manager, and the FreeBSD Foundation Some other highlights: Capsicum Update The Graphics Stack on FreeBSD Using lld, the LLVM Linker, to Link FreeBSD VirtualBox Shared Folders Filesystem evdev support (better mouse, keyboard, and multi-touch support) ZFS Code Sync with Latest OpenZFS/Illumos The ARC now mostly stores compressed data, the same as is stored on disk, decompressing it on demand. The L2ARC now stores the same (compressed) data as the ARC without recompression, and its RAM usage was further reduced. The largest size of indirect block possible has been increased from 16KB to 128KB, and speculative prefetching of indirect blocks is now performed. Improved ordering of space allocation. The SHA-512t256 and Skein hashing algorithms are now supported.
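The new hashing algorithms from the OpenZFS sync can be tried on a per-dataset basis; a minimal sketch, assuming an existing pool (tank/data is a placeholder dataset name):

```shell
# Select one of the newly supported checksum algorithms on a dataset:
zfs set checksum=sha512 tank/data
# or the Skein hash:
zfs set checksum=skein tank/data
# Confirm what is in effect (newly written blocks use the new setting):
zfs get checksum tank/data
```
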
*** Beastie Bits How to Host Your Own Private GitHub with Gogs (http://www.cs.cmu.edu/afs/cs/user/predragp/www/git.html) Nvidia Adds Telemetry To Latest Drivers (https://yro.slashdot.org/story/16/11/07/1427257/nvidia-adds-telemetry-to-latest-drivers) KnoxBUG Upcoming Meeting (http://knoxbug.org/2016-11-29) Feedback/Questions William - Show Music (http://pastebin.com/skvEgkLK) Ray - Mounting a Cell Phone (http://pastebin.com/nMDeSFGM) Ron - TrueOS + Radeon (http://pastebin.com/p5bC1jKU) (Follow-up - He used nvidia card) Kurt - ZFS Migration (http://pastebin.com/ud9vEK2C) Matt Dillon (Yes that Matt Dillon) - vkernels (http://pastebin.com/VPQfsUks) ***

167: Playing the Long Game

November 09, 2016 47:47 34.41 MB Downloads: 0

This week on BSDNow, Allan & Kris are out at MeetBSD, but we never forget our loyal listeners. We have a great interview Allan did with Scott Long of Netflix & FreeBSD fame, as well as your questions on the place to B...SD! This episode was brought to you by Interview - Scott Long - scottl@freebsd.org (mailto:scottl@freebsd.org) FreeBSD & Netflix *** Feedback/Questions Zack - USB Config (http://pastebin.com/u77LE0Md) Jens - VMs, Jails and Containers (http://pastebin.com/8KwDK6ay) Ranko - Tarsnap Keys (http://pastebin.com/Kie3EcjN) Alex - OpenBSD in Hyper-V (http://pastebin.com/nRJQ7UPZ) Curt - Discussion Segment (http://pastebin.com/ndx25pQA)

166: Pass that UNIX Pipe

November 02, 2016 55:16 39.79 MB Downloads: 0

This week on the show, we’re loaded up with great stories ranging from System call fuzzing, a history of UNIX Pipes, speeding up MySQL imports and more. Stay tuned, BSDNow is coming your way right now. This episode was brought to you by Headlines System call fuzzing of OpenBSD amd64 using TriforceAFL (i.e. AFL and QEMU) (https://github.com/nccgroup/TriforceOpenBSDFuzzer) The NCCGroup did a series of fuzz tests against the OpenBSD syscall interface, during which they found a number of vulnerabilities; we covered this back in the early summer. What we didn’t notice is that they also made the tools they used available. A combination of AFL (American Fuzzy Lop), QEMU, OpenBSD’s FlashRD image generation tool, and the “Triforce” driver The other requirement is “a Linux box as host to run the fuzzer (other fuzzer hosts may work as well, we've only run TriforceAFL from a Linux host, specifically Debian/Ubuntu)” It would be interesting to see if someone could get this to run from a BSD host It would also be interesting to run the same tests against the other BSDs *** On the Early History and Impact of Unix: the Introduction of Pipes (http://people.fas.harvard.edu/~lib113/reference/unix/unix2.html) Pipes are something we just take for granted today, but there was a time before pipes (How did anything get done?) Ronda Hauben writes up a great look back at the beginning of UNIX, and specifically at how pipes were born: One of the important developments in Unix was the introduction of pipes. Pipes had been suggested by McIlroy during the early days of creating Unix. Ritchie explains how "the idea, explained one afternoon on a blackboard, intrigued us but failed to ignite any immediate action. There were several objections to the idea as put....What a failure of imagination," he admits.(35) McIlroy concurs, describing how the initial effort to add pipes to Unix occurred about the same time in 1969 that Ritchie, Thompson and Canaday were outlining ideas for a file system.
"That was when," he writes, "the simple pipeline as a way to combine programs, with data notationally propagating along a chain of (not necessarily concurrent) filters was articulated."(36) However, pipes weren't implemented in Unix until 1972. We also have a great quote from McIlroy on the day pipes were first introduced: Open Systems! Our Systems! How well those who were there remember the pipe-festooned garret where Unix took form. The excitement of creation drew people to work there amidst the whine of the computer's cooling fans, even though almost the same computer access could be had from one's office or from home. Those raw quarters saw a procession of memorable events. The advent of software pipes precipitated a day-long orgy of one-liners ... as people reveled in the power of functional composition in the large, which is even today unavailable to users of other systems. The paper goes on to talk about the invention of other important tools, such as “grep”, “diff” and more. Well worth your time if you want a glimpse into the history of UNIX *** Speeding up MySQL Import on FreeBSD (https://blog.feld.me/posts/2016/09/speeding-up-mysql-import-on-freebsd/) Mark Felder writes a blog post explaining how to speed up MySQL bulk data imports “I was recently tasked with rebuilding a readonly slave database server which only slaves a couple of the available databases. The backup/dump is straightforward and fast, but the restore was being excruciatingly slow. I didn't want to wait a week for this thing to finish, so I had to compile a list of optimizations that would speed up the process. This is the best way to do it on FreeBSD, assuming you're working with InnoDB. Additional optimizations may be required if you're using a different database engine.” “Please note this is assuming no other databases are running on this MySQL instance.
Some of these are rather dangerous and you wouldn't want to put other live data at risk.” Most of the changes are meant to be temporary, used on a new server to import a dump of the database, then the settings are to be turned off. Specifically: sync_binlog = 0 innodb_flush_log_at_trx_commit = 0 innodb-doublewrite = 0 He also prepends the following bit of SQL before importing the data: set sql_log_bin=0; set autocommit=0; set unique_checks=0; set foreign_key_checks=0; You can also help yourself if your MySQL database lives on ZFS zfs set recordsize=16k pool/var/db/mysql zfs set redundant_metadata=most pool/var/db/mysql Remember, this tuning is ONLY for the initial import; leaving these settings on long term risks losing 5-10 seconds of your data if the server reboots unexpectedly zfs set sync=disabled pool/var/db/mysql zfs set logbias=throughput pool/var/db/mysql *** PostgreSQL and FreeBSD Quick Start (https://cwharton.com/blog/2016/10/postgresql-and-freebsd-quick-start/) There are lots of databases to choose from, but Postgres always has a special place on FreeBSD. Today we have a look at a ‘getting started’ guide for those taking the plunge and using it for the first time. Naturally getting started will look familiar to many, a couple simple “pkg” and “sysrc” commands later, and you’ll be set. After starting the service (With the “service” command) you’ll be ready to start setting up your postgres instance. Next up you’ll need to create your initial user/password combo, and a database with access granted to this particular user. If you plan to enable remote access to this DB server, you’ll need to make some adjustments to one of the .conf files, allowing other IP’s to connect. (If you are hosting something on the same system, this may not be needed) Now you should be good to go! Enjoy using your brand new Postgres database. If this is your first rodeo, maybe start with something easy, like Apache or Nginx + Wordpress to try it out.
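The quick-start steps above boil down to just a handful of commands; a sketch, where the package version and the appuser/appdb names are illustrative assumptions rather than the article's own:

```shell
# Install and enable the server (version number is an assumption;
# pick whichever postgresqlXX-server is current for your ports tree)
pkg install postgresql96-server
sysrc postgresql_enable=YES
service postgresql initdb     # create the initial database cluster
service postgresql start
# Create the initial user/password combo and a database it owns.
# The bundled superuser account is pgsql on older FreeBSD ports,
# postgres on newer ones -- adjust -U accordingly.
psql -U pgsql -c "CREATE USER appuser WITH PASSWORD 'secret';"
psql -U pgsql -c "CREATE DATABASE appdb OWNER appuser;"
# For remote access: set listen_addresses in postgresql.conf, add a
# matching host line to pg_hba.conf, then: service postgresql restart
```
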
*** News Roundup OpenBSD vmm hypervisor test drive (https://www.youtube.com/watch?v=KE_7E1pXy5c) As we asked a week or two ago, someone has taken OpenBSD’s vmm for a test drive, and made a video of it The command line interface for vmm, vmctl, looks quite easy to use. It takes an approach much closer to some of the bhyve management frameworks, rather than bhyve’s rather confusing set of switches It also has a config file, the format of which looks very similar to what I designed for bhyveucl, and my first effort to integrate a config file into bhyve itself. The video also looks at accessing the console, configuring the networking, and doing an OpenBSD install in a fresh VM Currently vmm only supports running OpenBSD VMs *** FreeBSD Foundation October 2016 Update (https://www.freebsdfoundation.org/wp-content/uploads/2016/10/FreeBSD-Foundation-October-2016-Update.pdf) Wow, November is already upon us with the Holidays just around the corner. Before things get lost in the noise we wanted to highlight this update from the FreeBSD foundation. Before getting into the stories, they helpfully provide a list of upcoming conferences for this fall/winter, which includes a couple of USENIX gatherings, and the Developer Summit / MeetBSD next week. The foundation gives us a quick hardware update initially, discussing some of the new ThunderX Cavium servers which are deployed (ARMv8 64Bit) and yes I’m drooling a bit. They also mention that work is ongoing for the RPi3 platform and PINE64. GNN also has an article reprinted from the FreeBSD journal, talking about the achievement of making it to 11.0 over the span of 23 years now. Of course he mentions that the foundation is open to all, and welcomes donations to continue to keep up this tradition of good work being done. Deb Goodkin gives us an update on the “Grace Hopper” convention that took place in Houston TX several weeks back.
Roughly 14k women in Tech attended, which is a great turnout, and FreeBSD was well represented there. Next we have a call to potential speakers, don’t forget that there are plenty of places you can help present about FreeBSD, not just at *BSD centered conferences, but the SCALEs of the world as well. We wrap up with a look at EuroBSDCon 2016, quite a nice writeup, again brought to us by Deb at the foundation, and includes a list of some of those recognized for their contributions to FreeBSD. *** Adhokku – a toy PaaS powered by FreeBSD jails and Ansible (https://github.com/adhokku/adhokku) Described as a toy Platform-as-a-Service, Adhokku is an Ansible-based automated jail creation framework Based on the concept of Dokku, a single-host open source PaaS for Linux powered by Docker When you deploy an application using Adhokku, Adhokku creates a new jail on the remote host and provisions it from a fixed clean state using the instructions in the Jailfile in your Git repository. All jails sit behind a reverse proxy that directs traffic to one of them based on the domain name or the IP address in the HTTP request. When a new jail has been provisioned for an application, Adhokku seamlessly reconfigures the reverse proxy to send traffic to it instead of the one currently active for that application. The following instructions show how to get Adhokku and an example application running in a VM on your development machine using Vagrant. This process should require no FreeBSD-specific knowledge, though modifying the Jailfile to customize the application may. This seems like an interesting project, and it is good to see people developing workflows so users familiar with Docker etc. can easily use BSD instead *** Installing OpenBSD 6.0 on your laptop is really hard (not) (http://sohcahtoa.org.uk/openbsd.html) OpenBSD on a laptop? Difficult? Not hardly.
We have a great walkthrough by Keith Burnett, which demonstrates just how easy it can be to get up and running with an XFCE desktop from a fresh OpenBSD installation. For those curious, this was all done with a Thinkpad X60 and 120GB SSD and OpenBSD 6.0. He doesn’t really cover the install process itself, that is well covered by the link to the OpenBSD FAQ pages. Once the system is up and running though, we start with the most important portion, getting working internet access (Via wifi) Really just a few ‘ifconfig’ commands later and we are in business. Step 2 was getting the package configuration going. (I’ve never understood why this is still a thing, but no fret, it’s easy enough to do) With package repos available, now you can grab the binaries for XFCE and friends with just a few simple “pkg_add” commands Steps 4-6 are some specific bits to enable XFCE services, and some handy things such as setting doas permissions to get USB mounting working (For graphical mount/unmount) Lastly, keeping the system updated is important, so we have a nice tutorial on how to do that as well, using a handy “openup” script, which takes some of the guesswork out of it. Bonus! Steps for doing FDE are also included, which isn’t for everyone, but you may want it *** Beastie Bits Pi-top with RPi-3 and FreeBSD HEAD (https://twitter.com/gvnn3/status/791475373380804608) NetBSD 7.0.2 released (http://blog.netbsd.org/tnf/entry/netbsd_7_0_1_released1) DragonflyBSD - git: kernel - Fix mmcsd read/write issues (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624851.html) A char device which implements an Enigma machine (FreeBSD & Linux) (https://github.com/rafael-santiago/dev-enigma) *** Feedback/Questions Matt - System Monitoring (http://pastebin.com/ayzvCuaq) Tony - LLVM License (http://pastebin.com/r5axPSE7) Ben - Thanks (http://pastebin.com/MNxCvUtX) David - Write Cache (http://pastebin.com/RswFASqW) Charles - Fonts (http://pastebin.com/e317a32f) ***

165: Vote4BSD

October 26, 2016 1:12:52 52.47 MB Downloads: 0

This week on BSDNow, we’ve got voting news for you (No, not that election), a closer look at This episode was brought to you by Headlines ARIN 38 involvement, vote! (http://lists.nycbug.org/pipermail/talk/2016-October/016878.html) Isaac (.Ike) Levy, one of our interview guests from earlier this year, is running for a seat on the 15-person ARIN Advisory Council His goal is to represent the entire *BSD community at this important body that makes decisions about how IP addresses are allocated and managed Biographies and statements for all of the candidates are available here (https://www.arin.net/participate/elections/candidate_bios.pdf) The election ends Friday October 28th If elected, Ike will be looking for input from the community *** LibreSSL not just available but default (DragonFlyBSD) (https://www.dragonflydigest.com/2016/10/19/18794.html) DragonFly has become the latest BSD to join the growing LibreSSL family. As mentioned a few weeks back, they were in the process of wiring it up as a replacement for OpenSSL. With this latest commit, you can now build the entire base and OpenSSL isn’t built at all. Congrats, and hopefully more BSDs (and Linux) jump on the bandwagon Compat_43 is gone (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624734.html) RIP 4.3 Compat support... Well, for DragonFly anyway. This commit finally puts out to pasture the 4.3 support, which has been disabled by default in DragonFly for almost 5 years now. This is a nice cleanup of their tree, removing more than a thousand lines of code and some of the old cruft still lingering from 4.3. *** Create your first FreeBSD kernel module (http://meltmes.kiloreux.me/create-your-first-freebsd-kernel-module/) This is an interesting tutorial from Abdelhadi Khiati, who is currently a master's student in AI and robotics I have been lucky enough to participate in Google Summer of Code with the FreeBSD foundation.
I was amazed by the community surrounding it which was noob friendly and very helpful (Thank you FreeBSD <3) I wanted to make a starting tutorial for people to write a simple kernel module before diving into more complicated kernel shizzle The kernel module that we will be working on is a simple event handler for the kernel. It will be composed of 2 parts: the event handling function, and the module declaration The module event handler is a function that handles different events for the module, like the module being loaded, unloaded, or on system shutdown Now that we have the event handling function ready, we need to declare the moduledata_t to be able to use it inside the DECLARE_MODULE macro and load it into the kernel. It has the module name and a pointer to the event handling function Lastly, we need to declare the module using the DECLARE_MODULE macro, which has the following structure: ~~ DECLARE_MODULE(name, moduledata_t data, sub, order); ~~ name: The module name that will be used in the SYSINIT() call to identify the module. data: The moduledata_t structure that we already presented. sub: Since we are using a driver here, the value will be SI_SUB_DRIVERS; this argument specifies the type of system startup interface. order: Represents the order of initialization within the subsystem; we will use the SI_ORDER_MIDDLE value here. To compile the previous file you need to use a Makefile like the following: ~~ KMOD=hello SRCS=module.c .include <bsd.kmod.mk> ~~ We look forward to a future post where more functionality is added to the kernel module Installing Windows 10 Under the bhyve Hypervisor. (http://pr1ntf.xyz/windows10.html) Looking for your bhyve fix? If so, then Trent (of iohyve fame) has a nice blog post today with a detailed look at how to get Windows 10 up and running in bhyve. First up, Trent gives us a nice look back at how far we’ve come in only a single year.
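Assembled into a single module.c, the event handler and declaration described in the kernel-module tutorial above look roughly like this (a sketch following the standard FreeBSD KLD pattern, not the author's exact code; it only builds against a FreeBSD kernel source tree):

```c
/* module.c - minimal FreeBSD kernel module sketch */
#include <sys/param.h>
#include <sys/module.h>
#include <sys/kernel.h>
#include <sys/systm.h>

/* The event handling function: called on load, unload, etc. */
static int
hello_modevent(module_t mod __unused, int event, void *arg __unused)
{
    int error = 0;

    switch (event) {
    case MOD_LOAD:
        uprintf("Hello, kernel!\n");
        break;
    case MOD_UNLOAD:
        uprintf("Goodbye, kernel.\n");
        break;
    default:
        error = EOPNOTSUPP;     /* an event we do not handle */
        break;
    }
    return (error);
}

/* The moduledata_t: module name plus a pointer to the handler. */
static moduledata_t hello_mod = {
    "hello",            /* name used by SYSINIT() */
    hello_modevent,     /* event handler */
    NULL                /* extra data, unused here */
};

DECLARE_MODULE(hello, hello_mod, SI_SUB_DRIVERS, SI_ORDER_MIDDLE);
```

Built with the Makefile shown above, the module can then be loaded and unloaded with kldload/kldunload.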
Just a year ago, initial support for UEFI was landing, there was no VNC option, leaving us to only serial console goodness. Fast-forward to today and Windows 10 + Bhyve + Vnc is a go. He immediately jumps us into the good stuff, talking about what you’ll need to follow along. His tutorial was written on 12-CURRENT, but running 11.0-RELEASE should work as well. Of course, he does mention that before starting on this quest, make sure to read the bhyve handbook, specifically check that your CPU is supported. If you are running something without the correct Vt extensions, then your journey will end prematurely in sadness. Next up is some of the prep work needed to get your box ready to run VM’s. Loading the kernel module, creating “tap” devices for networking and such are detailed. If you are lazy (like me) then you’ll want to copy-n-paste his handy scripts which automate this process for you. With the system prepped, we get to the good stuff. You’ll need to install the bhyve-firmware package (which enables UEFI booting) and get your handy Windows 10 ISO. From here Trent has helpfully again provided us with handy scripts to both do the bhyve startup, as well as enabling VNC support over a SSH tunnel. At this point you are good to go, fire up your VNC client and you should be greeted with the typical Windows “Press any key to boot from CD” message. No, he doesn’t provide instructions on how to install / Use / Like Windows, but we’ll leave that up to you. *** News Roundup Lumina version 1.1.0 Released (https://lumina-desktop.org/version-1-1-0-released/) A new version of Lumina has just landed! 1.1.0 brings with it some important fixes, as well as new utilities that make your desktop computing easier than ever. First up, all i18n files have been re-worked, instead of needing to install another package, they are included with the build when WITH_I18N is set. 
A handy new “start-lumina-desktop” command has been added, which makes it easy to get lumina running from your Login Manager or even manually in .xinitrc or the like. A bunch of internals related to how it tracks installed Applications and start-menu entries has been re-worked, fixing some memory issues and speeding things up. The default “Insight” file-manager has been given an overhaul, which includes some new features like “git” support. A new Qt5 “lumina-calculator” has also joined the family, which means not needing to use kcalc or xcalc on TrueOS anymore. A nice “TrueOS” specific option has also landed. Specifically now when System Updates are waiting to install at shutdown, Lumina will detect and prompt if you want to install or skip the update. Handy when on the road, or if you don’t have the time to wait for an update to complete. *** OpenBGPD Large Communities support in –current (http://bad.network/openbgpd-large-communities.html) A blog post from OpenBSD’s Peter Hessler: On Friday, I committed support for Large Communities to OpenBGPD. This is a draft-RFC that I am pretty excited about. Back in the early days of The Internet, when routers rode dinosaurs to work and nerds weren't cool, we wanted to signal to our network neighbours certain information about routes. To be fair, we still do. But, back then everyone had 16 bit ASNs, so there was a simple concept called 'communities'. This was a 32bit opaque value, that was traditionally split into two 16bit values. Conveniently, we were able to encode an "us" and a "them", and perform actions based on what our neighbours told us. But, 16bits is pretty limiting. There could only be ~65'000 possible networks on The Internet total? Eeek. So, we created 32bit ASNs. 4 billion networks is seen as a quite reasonable limitation. However, you can't really encode a 32bit "us" and a 32bit "them" value into 32bits of total space. 
Something called "Extended Communities" was invented, but it tries to solve everything except the case of a 32bit ASN signalling to another 32bit ASN. Enter Large Communities. This is 3 32bit values. The first one is the "owner" of the namespace. Normally, you would put in your own ASN, or the ASN that you wish to signal. The second two 32bit values are opaque and only have meaning from the originating operator, but normally people will use "myasn":"verb":"noun" or "myasn":"noun":"verb". Either way, it fits very nicely. Having previously ran a 32bit ASN, it became quickly obvious the lack of suitable communities was a critical problem. It was even the way to request an "old style" 16bit ASN from RIPE, "I need to use communities". Even the ability to say "do this to that ASN" was ugly, since you couldn't really communicate who the community was supposed to matter to. Clearly, we The Internet Community screwed up by not addressing this need earlier. OpenBGPD in OpenBSD -current has support for Large Communities, and this will be available in the 6.1 release and later. This was based partially on a patch from Job Snijders, thanks! First look at the renewed CTL High Availability implementation in FreeBSD (https://mezzantrop.files.wordpress.com/2016/10/first-look-at-the-renewed-ctl-high-availability-implementation-in-freebsd-v1-1.pdf) Following up on a previous post about making a high availability dual head storage controller, the new post looks at using FreeBSD’s CTL HA implementation, and FreeBSD 11.0 to do that: This enhancement looks extremely important for the BeaST storage system as implementation of high available native ALUA in FreeBSD can potentially replace the BeaST arbitration mechanism (“Arbitrator”), which is completely described in the papers on the BeaST project page ALUA in storage world terminology means Asymmetric Logical Unit Access.
In simple words, this set of technologies allows a host to access any LUN via both controllers of a storage system. As I still do not have any real hardware drive-enclosures, we will use Oracle VirtualBox and the iSCSI protocol. I have already deployed this environment for the BeaST development, so we can use a similar, yet more simplified, template for the renewed CTL HA testing purposes. If anyone has access to hardware of this nature (a storage shelf with 2 heads connected to it) that they could lend the author to help validate the design on real hardware, that would be most helpful.

> We will run two storage controllers (ctrl-a, ctrl-b) and a host (cln-1). A virtual SAS drive (da0) of 256 MB is configured as “shareable” in Virtual Media Manager and simultaneously connected to both storage controllers. The basic settings are applied to both controllers. One interesting setting is: kern.cam.ctl.ha_role – configures the default role for the node. So ctrl-a is set as 0 (primary node), ctrl-b as 1 (secondary node). The role can also be specified on a per-LUN basis, which allows LUNs to be distributed evenly over both controllers. Note, kern.cam.ctl.ha_id and kern.cam.ctl.ha_mode are read-only parameters and must be set only via the /boot/loader.conf file.
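As a rough sketch of what that configuration looks like on one node (the node IDs, mode, address and port here are assumptions for a lab setup like this one, not values taken from the article):

```
# /boot/loader.conf on ctrl-a -- ha_id and ha_mode are read-only
# at runtime, so they must be set here (see ctl(4) for mode values)
kern.cam.ctl.ha_id=1
kern.cam.ctl.ha_mode=2

# Runtime settings, from /etc/sysctl.conf or the shell:
# role: 0 = primary, 1 = secondary; can also be set per-LUN
sysctl kern.cam.ctl.ha_role=0
sysctl kern.cam.ctl.ha_peer="listen 192.168.10.1:999"
```

ctrl-b would mirror this with ha_id=2, ha_role=1, and a "connect 192.168.10.1:999" peer string, so the secondary dials into the primary.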
Once kern.cam.ctl.ha_peer is set, and the peers connect to each other, the log messages should reflect this: CTL: HA link status changed from 0 to 1, CTL: HA link status changed from 1 to 2. The link states can be: 0 – not configured, 1 – configured but not established, and 2 – established. Then ctld is configured to export /dev/da0 on each of the controllers. Then the client is booted, and uses iscsid to connect to each of the exposed targets. Setting sysctl kern.iscsi.fail_on_disconnection=1 on the client is needed to drop the connection with one of the controllers in case of its failure. As we know that da0 and da1 on the client are the same drive, we can put them under multipathing control: gmultipath create -A HA /dev/da0 /dev/da1. The document then shows a file being copied continuously to simulate load. Because the multipath is configured in ‘active/active’ mode, the traffic is split between the two controllers. Then the secondary controller is turned off, iscsi disconnects that path, and gmultipath adapts and sends all of the traffic over the primary path. When the secondary node is brought back up, but the primary is taken down, traffic stops. The console on the client is filled with errors: “Logical unit not accessible, asymmetric access state transition”. The ctl(4) man page explains:

> If there is no primary node (both nodes are secondary, or the secondary node has no connection to the primary one), the secondary node(s) report the Transitioning state.

> Therefore, it looks like “normal” behavior of a CTL HA cluster in the case of a disaster and loss of the primary node. It also means that a very lucky administrator can restore the failed primary controller before the timeouts elapse.

If the primary is down, the secondary needs to be promoted by some other process (CARP maybe?): sysctl kern.cam.ctl.ha_role=0. Then traffic flows again. This is a very interesting look at this new feature, and I hope to see more about it in the future. ***

Is SPF Simply Too Hard for Application Developers?
(http://bsdly.blogspot.com/2016/10/is-spf-simply-too-hard-for-application.html)

Peter Hansteen asks an interesting question: The Sender Policy Framework (SPF) is unloved by some, because it conflicts with some long-established SMTP email use cases. But is it also just too hard to understand and to use correctly for application developers? He tells a story about trying to file his Norwegian taxes, and running into a bug. Then in August 2016, I tried to report a bug via the contact form at Altinn.no, the main tax authority's web site. The report in itself was fairly trivial: The SMS alert I had just received about an invoice for taxes due contained one date, which turned out to be my birth date rather than the invoice due date. Not a major issue, but potentially confusing to the recipient until you actually log in and download the invoice as PDF and read the actual due date and other specifics. The next time I checked my mail at bsdly.net, I found this bounce: support@altinn.no: SMTP error from remote mail server after RCPT TO:: host mx.isp.as2116.net [193.75.104.7]: 550 5.7.23 SPF validation failed which means that somebody, somewhere tried to send a message to support@altinn.no, but the message could not be delivered because the sending machine did not match the published SPF data for the sender domain. What happened is actually quite clear even from the part quoted above: the host mx.isp.as2116.net [193.75.104.7] tried to deliver mail on my behalf (I received the bounce, remember), and since I have no agreement for mail delivery with the owners and operators of that host, it is not in bsdly.net's SPF record either, and the delivery fails. After having a bunch of other problems, he finally gets a message back from the tax authority support staff: It looks like you have Sender Policy Framework (SPF) enabled on your mailserver. It is a known weakness of our contact form that mailservers with SPF are not supported.
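For context, SPF data is just a DNS TXT record listing the hosts permitted to send mail for a domain. A hypothetical record (not bsdly.net's actual one) might look like:

```
example.org.  3600  IN  TXT  "v=spf1 mx a:mail.example.org -all"
```

The trailing "-all" tells receiving mail servers to reject anything sent from a host not listed in the record, which is exactly what the message relayed through mx.isp.as2116.net ran into.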
The obvious answer should be, as you will agree if you're still reading: The form's developer should place the user's email address in the Reply-To: field, and send the message as its own, valid local user. That would solve the problem. Yes, I'm well aware that SPF also breaks traditional forwarding of the type generally used by mailing lists and a few other use cases. Just how afraid should we be when those same developers come to do battle with the followup specifications such as DKIM and (shudder) the full DMARC specification?

Beastie Bits

Looking for a very part-time SysAdmin (https://lists.freebsd.org/pipermail/freebsd-jobs/2016-October/000930.html)
If anyone wants to build the latest nodejs on OpenBSD... (https://twitter.com/qb1t/status/789610796380598272)
IBM considers donating Power8 servers to OpenBSD (https://marc.info/?l=openbsd-misc&m=147680858507662&w=2)
Install and configure DNS server in FreeBSD (https://galaxy.ansible.com/vbotka/freebsd-dns/)
bhyve vulnerability in FreeBSD 11.0 (https://www.freebsd.org/security/advisories/FreeBSD-SA-16:32.bhyve.asc)

Feedback/Questions

Larry - Pkg Issue (http://pastebin.com/8hwDVQjL)
Larry - Followup (http://pastebin.com/3nswwk90)
Jason - TrueOS (http://pastebin.com/pjfYWdXs)
Matias - ZFS HALP! (http://pastebin.com/2tAmR5Wz)
Robroy - User/Group (http://pastebin.com/7vWvUr8K)

***

164: Virtualized COW / PI?

October 19, 2016 1:40:37 72.44 MB Downloads: 0

This week on the show, we’ve got all sorts of goodies to discuss. Starting with vmm, vkernels, Raspberry Pi and much more! Some iX folks are visiting from out of town. This episode was brought to you by

Headlines

vmm enabled (http://undeadly.org/cgi?action=article&sid=20161012092516&mode=flat&count=15)

VMM, the OpenBSD hypervisor, has been imported into current. It has similar hardware requirements to bhyve: an Intel Nehalem or newer CPU with the hardware virtualization features enabled in the BIOS. AMD support has not been started yet. OpenBSD is the only supported guest. It would be interesting to hear from viewers that have tried it, how it does, and what still needs more work. ***

vkernels go COW (http://lists.dragonflybsd.org/pipermail/commits/2016-October/624675.html)

The DragonFlyBSD feature, vkernels, has gained new Copy-On-Write functionality. Disk images can now be mounted RO or RW, but changes will not be written back to the image file. This allows multiple vkernels to share the same disk image. “Note that when the vkernel operates on an image in this mode, modifications will eat up system memory and swap, so the user should be cognizant of the use-case. Still, the flexibility of being able to mount the image R+W should not be underestimated.” This is another feature we’d love to hear from viewers that have tried it out. ***

Basic support for the RPI3 has landed in FreeBSD-CURRENT (https://wiki.freebsd.org/arm64/rpi3)

The long awaited bits to allow FreeBSD to boot on the Raspberry Pi 3 have landed. There is still a bit of work to be done, some of it mentioned in Oleksandr’s blog post: Raspberry Pi support in HEAD (https://kernelnomicon.org/?p=690) “Raspberry Pi 3 limited support was committed to HEAD. Most of the drivers should work with the upstream dtb; RNG requires attention because callout mode seems to be broken and there is no IRQ in the upstream device tree file. SMP is work in progress.
There are some compatibility issues with the VCHIQ driver due to some assumptions that are true only for the ARM platform.” This is exciting work. No HDMI support (yet), so if you plan on trying this out, make sure you have your USB->Serial adapter cables ready to go. Full instructions to get started with your RPI 3 can be found on the FreeBSD Wiki (https://wiki.freebsd.org/arm64/rpi3). Relatively soon, I imagine there will be a RaspBSD build for the RPI3 to make it easier to get started. Eventually there will be official FreeBSD images as well. ***

OpenBSD switches softraid crypto from PKCS5 PBKDF2 to bcrypt PBKDF. (https://github.com/openbsd/src/commit/2ba69c71e92471fe05f305bfa35aeac543ebec1f)

After the discussion a few weeks ago, when a user wrote a tool to brute force their forgotten OpenBSD Full Disk Encryption password (from a password list of possible variations of their password), it was discovered that OpenBSD defaulted to using just 8192 iterations of PKCS #5 PBKDF2 with a SHA1-HMAC for the key derivation function. The number of iterations can be manually controlled by the user when creating the softraid volume. By comparison, FreeBSD's GELI full disk encryption uses a benchmark to pick a number of iterations that will take more than 2 seconds to complete, generally resulting in a number of iterations over 1 million on most modern hardware. That algorithm is based on a SHA512-HMAC. However, inefficiency in GELI's PBKDF2 implementation resulted in it being 50% slower than some other implementations, meaning the effective security was only about 1 second per attempt, rather than the intended 2 seconds. An improved PBKDF2 implementation is currently out for review. This commit to OpenBSD changes the default key derivation function to be based on bcrypt and a SHA512-HMAC instead.
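As a reminder of the knob involved: when creating a softraid crypto volume, bioctl's -r flag selects the KDF iteration count. A hedged sketch (the device names and round count below are examples, not values from the commit):

```
# Create an encrypted softraid volume on sd0a, explicitly asking for
# far more KDF rounds than the old 8192 default
bioctl -c C -r 500000 -l /dev/sd0a softraid0
```

Users who never touched -r got the weak 8192-round default, which is what the new benchmarked default addresses.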
OpenBSD also now uses a benchmark to pick a number of iterations that will take approximately 1 second per attempt. “One weakness of PBKDF2 is that while its number of iterations can be adjusted to make it take an arbitrarily large amount of computing time, it can be implemented with a small circuit and very little RAM, which makes brute-force attacks using application-specific integrated circuits or graphics processing units relatively cheap. The bcrypt key derivation function requires a larger amount of RAM (but still not tunable separately, i.e. fixed for a given amount of CPU time) and is slightly stronger against such attacks, while the more modern scrypt key derivation function can use arbitrarily large amounts of memory and is therefore more resistant to ASIC and GPU attacks.” The upgrade to bcrypt, which has proven to be quite resistant to cracking by GPUs, is a significant enhancement to OpenBSD's encrypted softraid feature. ***

Interview - Josh Paetzel - email@email (mailto:email@email) / @bsdunix4ever (https://twitter.com/bsdunix4ever)

MeetBSD
ZFS Panel
FreeNAS - graceful network reload
Pxeboot

***

News Roundup

EC2's most dangerous feature (http://www.daemonology.net/blog/2016-10-09-EC2s-most-dangerous-feature.html)

Colin Percival, FreeBSD’s unofficial EC2 maintainer, has published a blog post about “EC2's most dangerous feature”. “As a FreeBSD developer — and someone who writes in C — I believe strongly in the idea of "tools, not policy". If you want to shoot yourself in the foot, I'll help you deliver the bullet to your foot as efficiently and reliably as possible. UNIX has always been built around the idea that systems administrators are better equipped to figure out what they want than the developers of the OS, and it's almost impossible to prevent foot-shooting without also limiting useful functionality.
The most powerful tools are inevitably dangerous, and often the best solution is to simply ensure that they come with sufficient warning labels attached; but occasionally I see tools which not only lack important warning labels, but are also designed in a way which makes them far more dangerous than necessary. Such a case is IAM Roles for Amazon EC2.” “A review for readers unfamiliar with this feature: Amazon IAM (Identity and Access Management) is a service which allows for the creation of access credentials which are limited in scope; for example, you can have keys which can read objects from Amazon S3 but cannot write any objects. IAM Roles for EC2 are a mechanism for automatically creating such credentials and distributing them to EC2 instances; you specify a policy and launch an EC2 instance with that Role attached, and magic happens making time-limited credentials available via the EC2 instance metadata. This simplifies the task of creating and distributing credentials and is very convenient; I use it in my FreeBSD AMI Builder AMI, for example. Despite being convenient, there are two rather scary problems with this feature which severely limit the situations where I'd recommend using it.” “The first problem is one of configuration: The language used to specify IAM Policies is not sufficient to allow for EC2 instances to be properly limited in their powers. For example, suppose you want to allow EC2 instances to create, attach, detach, and delete Elastic Block Store volumes automatically — useful if you want to have filesystems automatically scaling up and down depending on the amount of data which they contain. 
The obvious way to do this would be to "tag" the volumes belonging to an EC2 instance and provide a Role which can only act on volumes tagged to the instance where the Role was provided; while the second part of this (limiting actions to tagged volumes) seems to be possible, there is no way to require specific API call parameters on all permitted CreateVolume calls, as would be necessary to require that a tag is applied to any new volumes being created by the instance.” “As problematic as the configuration is, a far larger problem with IAM Roles for Amazon EC2 is access control — or, to be more precise, the lack thereof. As I mentioned earlier, IAM Role credentials are exposed to EC2 instances via the EC2 instance metadata system: In other words, they're available from http://169.254.169.254/. (I presume that the "EC2ws" HTTP server which responds is running in another Xen domain on the same physical hardware, but that implementation detail is unimportant.) This makes the credentials easy for programs to obtain... unfortunately, too easy for programs to obtain. UNIX is designed as a multi-user operating system, with multiple users and groups and permission flags and often even more sophisticated ACLs — but there are very few systems which control the ability to make outgoing HTTP requests. We write software which relies on privilege separation to reduce the likelihood that a bug will result in a full system compromise; but if a process which is running as user nobody and chrooted into /var/empty is still able to fetch AWS keys which can read every one of the objects you have stored in S3, do you really have any meaningful privilege separation? To borrow a phrase from Ted Unangst, the way that IAM Roles expose credentials to EC2 instances makes them a very effective exploit mitigation mitigation technique.” “To make it worse, exposing credentials — and other metadata, for that matter — via HTTP is completely unnecessary.
EC2 runs on Xen, which already has a perfectly good key-value data store for conveying metadata between the host and guest instances. It would be absolutely trivial for Amazon to place EC2 metadata, including IAM credentials, into XenStore; and almost as trivial for EC2 instances to expose XenStore as a filesystem to which standard UNIX permissions could be applied, providing IAM Role credentials with the full range of access control functionality which UNIX affords to files stored on disk. Of course, there is a lot of code out there which relies on fetching EC2 instance metadata over HTTP, and trivial or not it would still take time to write code for pushing EC2 metadata into XenStore and exposing it via a filesystem inside instances; so even if someone at AWS reads this blog post and immediately says "hey, we should fix this", I'm sure we'll be stuck with the problems in IAM Roles for years to come.” “So consider this a warning label: IAM Roles for EC2 may seem like a gun which you can use to efficiently and reliably shoot yourself in the foot; but in fact it's more like a gun which is difficult to aim and might be fired by someone on the other side of the room snapping his fingers. Handle with care!” ***

Open-source storage that doesn't suck? Our man tries to break TrueNAS (http://www.theregister.co.uk/2016/10/18/truenas_review/)

The storage reviewer over at The Register got their hands on a TrueNAS and gave it a try. “Data storage is difficult, and ZFS-based storage doubly so. There's a lot of money to be made if you can do storage right, so it's uncommon to see a storage company with an open-source model deliver storage that doesn't suck.” “To become TrueNAS, FreeNAS's code is feature-frozen and tested rigorously. Bleeding-edge development continues with FreeNAS, and FreeNAS comes with far fewer guarantees than does TrueNAS.” “iXsystems provided a Z20 hybrid storage array. The Z20 is a dual-controller, SAS-based, high-availability, hybrid storage array.
The testing unit came with a 2x 10GbE NIC per controller and retails around US$24k. The unit shipped with 10x 300GB 10k RPM magnetic hard drives, an 8GB ZIL SSD and a 200GB L2ARC SSD. 50GiB of RAM was dedicated to the ARC by the system's autotune feature.” The review tests the performance of the TrueNAS, which they found acceptable for spinning rust, and they also tested the HA features. While the look of the UI didn’t impress them, the functionality and built-in help did. “The UI contains truly excellent mouseover tooltips that provide detailed information and rationale for almost every setting. An experienced sysadmin will be able to navigate the TrueNAS UI with ease. An experienced storage admin who knows what all the terms mean won't have to refer to a wiki or the more traditional help manual, but the same can't be said for the uninitiated.” “After a lot of testing, I'd trust my data to the TrueNAS. I am convinced that it will ensure the availability of my data to within any reasonable test, and do so as a high availability solution. That's more than I can say for a lot of storage out there.” “iXsystems produce a storage array that is decent enough to entice away some existing users of the likes of EMC, NetApp, Dell or HP. Honestly, that's not something I thought possible going into this review. It's a nice surprise.” ***

OpenBSD now officially on GitHub (https://github.com/openbsd)

Got a couple of new OpenBSD items to bring to your attention today. First up, for those who didn’t know, OpenBSD development has (always?) taken place in CVS, similar to NetBSD and previously FreeBSD. However, today Git fans can rejoice, since there is now an “official” read-only GitHub mirror of their sources for public consumption. Since this is read-only, I will assume (unless told otherwise) that pull requests and whatnot aren’t taken. But this will come in handy for the “git-enabled” among us who need an easier way to check out OpenBSD sources.
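Checking out from the mirror is the standard GitHub one-liner; the same pattern should work for the other exported repositories:

```
# Clone the read-only OpenBSD source mirror (read-only: there is no
# push access, and pull requests presumably go nowhere)
git clone https://github.com/openbsd/src.git
```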
There is also not yet a guarantee about the stability of the exporter. If you base a fork on the GitHub branch and something goes wrong with the exporter, the data may be re-exported with different hashes, making it difficult to rebase your fork. ***

How to install LibertyBSD or OpenBSD on a libreboot system (https://libreboot.org/docs/bsd/openbsd.html)

For the second part of our OpenBSD stories, we have a pretty detailed document posted over at LibreBoot.org with details on how to bootstrap OpenBSD (or LibertyBSD) using their open-source BIOS replacement. We’ve covered blog posts and other tidbits about this process in the past, but this seems to be the definitive version (so far) to reference. Some of the niceties include instructions on getting the USB image formatted not just on OpenBSD, but also FreeBSD, Linux and NetBSD. Instructions on how to boot without full-disk-encryption are provided, with a mention that so far Libreboot + GRUB does not support FDE (yet). I would imagine somebody will need to port over the OpenBSD FDE crypto support to GRUB, as was done with GELI at some point. Lastly, some instructions on how to configure GRUB, and troubleshoot if something goes wrong, help round out this story. Give it a whirl, let us know if you run into issues.

Editorial Aside - Personally I find the libreboot stuff fascinating. It really is one of the last areas where we don’t have full control of our systems with open source. With the growth of EFI, it seems we rely on a closed-source binary / mini-OS of sorts just to boot our open-source solutions, which needs to be addressed. Hats off to the LibreBoot folks for taking on this important challenge.
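On the GRUB side, the BSD loader commands keep the boot entry short. A hedged sketch of what a grub.cfg stanza might look like (the device and partition names here are assumptions; follow the libreboot document for the exact values on your machine):

```
menuentry "OpenBSD" {
    # kopenbsd is GRUB's native OpenBSD kernel loader;
    # -r names the OpenBSD root partition to hand to the kernel
    kopenbsd -r sd0a (ahci0,openbsd1)/bsd
}
```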
***

FreeNAS 9.10 – LAGG & VLAN Overview (https://www.youtube.com/watch?v=wqSH_uQSArQ)

A video tutorial on FreeNAS’s official YouTube channel. Covers the advanced networking features, Link Aggregation and VLANs: what the features do, and in the case of LAGG, how each of the modes works and when you might want to use it. ***

Beastie Bits

Remote BSD Developer Position is up for grabs (https://www.cybercoders.com/bsd-developer-remote-job-305206)
Isilon is hiring for a FreeBSD Security position (https://twitter.com/jeamland/status/785965716717441024)
Google has ported the Networked real-time multi-player BSD game (https://github.com/google/web-bsd-hunt)
A bunch of OpenBSD Tips (http://www.vincentdelft.be)
The last OpenBSD 6.0 Limited Edition CD has sold (http://www.ebay.com/itm/-/332000602939)
Dan spots George Neville-Neil on TV at the Airport (https://twitter.com/DLangille/status/788477000876892162)
gnn on CNN (https://www.youtube.com/watch?v=h7zlxgtBA6o)
SoloBSD releases v6.0 built upon OpenBSD (http://solobsd.blogspot.com/2016/10/release-solobsd-60-openbsd-edition.html)
Upcoming KnoxBug looks at PacBSD - Oct 25th (http://knoxbug.org/content/2016-10-25)

Feedback/Questions

Morgan - Ports and Packages (http://pastebin.com/Kr9ykKTu)
Mat - ZFS Memory (http://pastebin.com/EwpTpp6D)
Thomas - FreeBSD Path Length (http://pastebin.com/HYMPtfjz)
Cy - OpenBSD and NetHogs (http://pastebin.com/vGxZHMWE)
Lars - Editors (http://pastebin.com/5FMz116T)

***