Corey Bryant: OpenStack Queens for Ubuntu 16.04 LTS

Planet Ubuntu - Thu, 03/01/2018 - 14:31

Hi All,

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Queens on Ubuntu 16.04 LTS via the Ubuntu Cloud Archive. Details of the Queens release can be found at:

To get access to the Ubuntu Queens packages:

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive pocket for OpenStack Queens on Ubuntu 16.04 installations by running the following commands:

sudo add-apt-repository cloud-archive:queens
sudo apt update
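
Once the pocket is enabled and the indexes updated, you can confirm that a package will now come from the Queens pocket with apt's policy command (python-nova here is just an illustrative pick; any package from the list below will do):

apt policy python-nova

The candidate version should be offered from the Ubuntu Cloud Archive (xenial-updates/queens) rather than the standard Ubuntu pockets.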

The Ubuntu Cloud Archive for Queens includes updates for:

aodh, barbican, ceilometer, ceph (12.2.2), cinder, congress, designate, designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas, neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0), panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar.

For a full list of packages and versions, please refer to [0].

Branch Package Builds

If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike
sudo add-apt-repository ppa:openstack-ubuntu-testing/queens

Reporting bugs

If you have any issues, please report bugs using the ‘ubuntu-bug’ tool to ensure that they get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thanks to everyone who has contributed to OpenStack Queens, both upstream and downstream!

Have fun and see you in Rocky!

(on behalf of the Ubuntu OpenStack team)


Categories: Linux

Martin Albisetti: On well executed releases and remote teams

Planet Ubuntu - Wed, 02/28/2018 - 18:11

After some blood, sweat and tears, we finally brought Stacksmith into the world, yay!

It’s been a lengthy and intense process that started with putting together a team to be able to build the product in the first place, and taking Bitnami’s experience and some existing tooling to make the cloud more accessible to everyone. It’s been a good week.

However, I learnt something I didn’t quite grasp before: if you find really good people, focus on the right things, scope projects to an achievable goal and execute well, releases lack a certain explosion of emotions that are associated with big milestones. Compounded with the fact that the team that built the product are all working remotely, launch day was pretty much uneventful.
I’m very proud of what we’ve built, and we did it with a lot of care and attention. We agonized over trade-offs during the development process, did load testing for capacity planning, and added metrics to get hints as to when the user experience would start to suffer. We did CI/CD from day one, so deployments were well guarded against breaking changes and didn’t affect the user experience. We did enough but not too much. We rallied the whole company a few weeks before release to try and break the service, asked people who hadn’t used it before to go through the whole process and document each step, and tried doing new and unexpected things with the product. The website was updated! The marketing messaging and material were discussed and tested, analysts were briefed, email campaigns were set up. All the basic checklists were completed. It’s uncommon to be able to align all the teams, timelines and incentives.
What I learned this week is that if you do, releases are naturally boring.

I’m not quite sure what to do with that, there’s a sense of pride when rationalizing it, but I can’t help but feel that it’s a bit unfair that if you do things well enough the intrinsic reward seems to diminish.

I guess what I’m saying is, good job, Bitnami team!

Categories: Linux

Sebastian Kügler: Connecting new screens

Planet Ubuntu - Wed, 02/28/2018 - 06:59

Plasma’s new screen layout selection dialog

This week, Dan Vratil and I merged a new feature in KScreen, Plasma’s screen configuration tool. Up until now, when plugging in a new display (a monitor, projector or TV, for example), Plasma would automatically extend the desktop area to include this screen. In many cases this is expected behavior, but it’s not necessarily clear to the user what just happened. Perhaps the user would rather have the new screen on the other side of the current one, clone the existing screen, switch over to it, or not use it at all at this point.
The new behavior is to now pop up a selection on-screen display (OSD) on the primary screen or laptop panel allowing the user to pick the new configuration and thereby make it clear what’s happening. When the same display hardware is plugged in again at a later point, this configuration is remembered and applied again (no OSD is shown in that case).
Another change-set which we’re about to merge pops up the same selection dialog when the user presses the display button found on many laptops. This has been nagging me for quite a while: the display button switched the screen configuration but provided very little visual feedback to the user about what was happening, so it wasn’t very user-friendly. This new feature will be part of Plasma 5.13, to be released in June 2018.

Categories: Linux

Benjamin Mako Hill: XORcise

Planet Ubuntu - Tue, 02/27/2018 - 12:41

XORcise (ɛɡ.zɔʁ.siz) verb 1. To remove observations from a dataset if they satisfy one of two criteria, but not both. [e.g., After XORcising adults and citizens, only foreign children and adult citizens were left.]

Categories: Linux

Ubuntu Insights: Kernel Team summary: February 27, 2018

Planet Ubuntu - Tue, 02/27/2018 - 12:32
Development (18.04)

On the road to 18.04 we have a 4.15 based kernel in the Bionic repository.

Important upcoming dates:

  • 16.04.4 Point Release - Mar 1 (~1 week away)
  • Feature Freeze - Mar 1 (~1 week away)
  • Beta 1 - Mar 8 (~2 weeks away)
  • Final Beta - Apr 5 (~6 weeks away)
  • Kernel Freeze - Apr 12 (~7 weeks away)
  • Final Freeze - Apr 19 (~8 weeks away)
  • Ubuntu 18.04 - Apr 26 (~9 weeks away)

Stable (Released & Supported)

The Ubuntu Kernel Team is happy to announce that we are resuming our
regular SRU cadence cycle. See the schedule below for the important
dates for the upcoming SRU cycle.

  • Next cycle: 09-Mar through 31-Mar
      09-Mar  Last day for kernel commits for this cycle.
      12-Mar - 17-Mar  Kernel prep week.
      18-Mar - 30-Mar  Bug verification & regression testing.
      02-Apr  Release to -updates.
  • fwts 18.02.00 released
  • The current CVE status
  • If you would like to reach the kernel team, you can find us at the #ubuntu-kernel
    channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing
    list at:
Categories: Linux

Ubuntu Insights: Ubuntu Server development summary – 27 February 2018

Planet Ubuntu - Tue, 02/27/2018 - 12:24
Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: Call for Testing – Chrony and Subiquity

With the release of Bionic quickly approaching, the Server team would like to send out a call for testing for two new features. The first is the new Ubuntu Server ISO based on Subiquity. Check out this blog post by Dustin on how to get the new ISO and for an overview of the new process. The second is a request sent to the ubuntu-server mailing list to test Chrony.

  • cloud-init 18.1 released!
  • ds-identify: Fix searching for iso9660 OVF cdroms for vmware (LP: #1749980)
  • Documented chef example incorrectly represented apt source configuration for chef install
  • SUSE: Fix groups used for ownership of cloud-init.log (Robert Schweikert)
  • OVF: Fix VMware support for 64-bit platforms (Sankar Tanguturi)
  • Salt: configure grains in grains file rather than in minion config (Daniel Wallace)
  • Implement puppet 4 support (Romanos Skiadas)
  • docs: Experimental ZFS support
  • docs: Add HACKING.rst doc to readthedocs
  • vmtest: fix centos image sync
  • fix install failure exit code
  • New CI pipeline in place
  • New self-test command to run lint tests and unit tests
Bug Work and Triage

Contact the Ubuntu Server team

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 6

Uploads Released to the Supported Releases

Total: 15

Uploads to the Development Release

Total: 26

Categories: Linux

Ubuntu Insights: Charming Discourse with the reactive framework

Planet Ubuntu - Tue, 02/27/2018 - 09:36

Recently the Canonical IS department was asked to deploy the Discourse forum software for a revamp of the Ubuntu Community site. Discourse is a modernization of classic forum/bulletin board software packages and is something IS has had an interest in for some time, so I was happy to help the Community team get this deployed. An initial, exploratory deployment was done to get a feel for the software, and the obvious next step towards production deployment was charming Discourse for easy, repeatable deployment in the future.

This process posed some interesting challenges as an exercise in charming: Discourse is a Ruby on Rails application, a stack with which Canonical IS doesn’t have substantial institutional experience and with which I personally have essentially none. Luckily, writing charms with the reactive toolkit provides layers to abstract away much of the technology-specific aspects of the process, allowing the charmer to focus on code directly related to the application they’re charming.

I’ll be building the charm with charms.reactive, the most current framework for writing charms, which provides a set of pre-built layers and interfaces to make interfacing with other services and charms quick and easy. The charm tool helpfully provides a set of defaults for working with charms.reactive. Both the charm tool and charms.reactive are amply documented elsewhere, so let’s skip to the code.

Note that some code is redacted for the sake of clarity, but the full charm is available at for anyone to view.

It happens that the Discourse code is distributed as a git tree and there’s a charm layer specifically for deploying from git repos, so the base of the install is quite simple:

@when('codebase.available')
@when('ruby.available')
@when_not('discourse.installed')
def install_discourse():
    config = {'hostname': hookenv.config('hostname')}
    # Required to work with a stock install postfix
    config['smtp_address'] = ""
    config['smtp_openssl_verify_mode'] = "none"
    config['smtp_domain'] = hookenv.config('hostname')
    # There's no config default for this, so only write it if it's set
    if hookenv.config('admin-users'):
        config['developer_emails'] = hookenv.config('admin-users')
    write_config(config)
    # Bundle command is from the ruby layer, it'll install our gem dependencies
    bundle('install')
    hookenv.status_set('blocked', 'Discourse installed, waiting on database.')
    set_state('discourse.installed')

The git-deploy layer sets the codebase.available state when it has successfully cloned the configured git repo from the remote source. Likewise, the ruby layer sets ruby.available when it’s ready, so we’re waiting on both those states. To prevent the initial configuration from running multiple times, we guard the function with a state we set at the end, discourse.installed, which we can also use to trigger additional reactions as we flow through the installation process. The actual code here is straightforward: we fetch some things from the juju configuration and write them out to the discourse.conf file with a utility function, then use the bundle command from the ruby layer to install gem dependencies.

We now have the code cloned locally and the ruby dependencies installed; next we’ll need a database. Discourse needs a pair of postgres extensions installed, and we might as well specify the database name. That way, if we ever need to do multi-site or something else unusual, we can control it via a configuration option down the road.

@when_not('discourse.database.requested')
@when('db.connected')
def request_db(pgsql):
    pgsql.set_database('discourse')
    pgsql.set_extensions(['hstore', 'pg_trgm'])
    set_state('discourse.database.requested')

The postgresql layer provides the db.connected state; the name of the state is determined by the name of the relation connecting the two charms, and since we chose to name the relation db on the Discourse side, the state is named db. Note that we pass pgsql into this function, a class provided by the postgresql layer that lets us manipulate the database and extract information about the connection to it. Again we’re guarding this function with a state set within it, so it only runs once.

Once the database and extensions are created, we can configure it.

@when_not('discourse.database.configured')
@when('discourse.installed')
@when('db.master.available')
def db_available(pgsql):
    write_db_config(pgsql)
    set_state('discourse.database.configured')

That’s pretty uninteresting: discourse.installed is set by the install function, so this runs after the install happens and we know we’re not writing a config file to an empty directory. The db.master.available state is set by the postgres layer when the database is actually available, rather than just when the postgresql relation has been connected, so this configuration happens after database and extension creation.

Let’s look at the write_db_config utility function though.

def write_db_config(pgsql):
    config = ingest_config()
    db_config = pgsql.master
    config['db_name'] = db_config.dbname
    config['db_host'] = db_config.host
    config['db_username'] = db_config.user
    config['db_password'] = db_config.password
    write_config(config)

ingest_config is another utility function; it reads the existing discourse.conf and turns it into a dictionary that we can pass around, preserving anything in it that may have been configured either by another part of the charm or outside of juju for some reason. The rest of this simply takes pieces of the database connection information provided by the pgsql class and adds them to the config dictionary with key names appropriate to the discourse.conf file, which we then write out, setting a state to indicate we’ve configured Discourse’s database connection.
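
The ingest_config and write_config helpers themselves aren’t shown in this post. Since discourse.conf is a flat file of key/value pairs, a minimal sketch of the pair might look like the following (the file path and exact formatting are assumptions for illustration, not code from the actual charm):

DISCOURSE_CONF = '/srv/discourse/current/config/discourse.conf'

def ingest_config():
    # Read discourse.conf into a dict, preserving existing settings
    config = {}
    if os.path.exists(DISCOURSE_CONF):
        with open(DISCOURSE_CONF) as conf:
            for line in conf:
                line = line.strip()
                if line and not line.startswith('#') and '=' in line:
                    key, value = line.split('=', 1)
                    config[key.strip()] = value.strip()
    return config

def write_config(config):
    # Write the dict back out as simple 'key = value' lines
    with open(DISCOURSE_CONF, 'w') as conf:
        for key in sorted(config):
            conf.write('%s = %s\n' % (key, config[key]))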

Now that we have a basic configuration that will actually do something, there’s some preparatory work unique to Discourse that needs to happen before we can get much further, so we’ll go ahead and prepare the code for running.

@when_not('discourse.codebase.prepared')
@when('db-admin.master.available')
@when('discourse.database.configured')
@when('discourse.database.requested')
def prepare_codebase(pgsql):
    # Create/update all the Discourse specific DB schema
    bundle('exec rake db:migrate RAILS_ENV=production')
    # Compile CSS/JS
    bundle('exec rake assets:precompile RAILS_ENV=production')
    if hookenv.config('plugins'):
        fetch_plugins()
    # The ruby layer doesn't have a concept of ownership, so after the rakes
    # have run chown everything
    subprocess.call(["/bin/chown", "-R", "www-data:www-data", "/srv/discourse/current/"])
    # Set some states to trigger server configuration off of
    set_state('discourse.configure.unicorn')
    set_state('discourse.configure.sidekiq')
    set_state('discourse.codebase.prepared')

We trigger on a guard state set within the reaction itself, as well as requiring that we have a database created, configured and available, putting together various states set in the previous functions. The bundle command, as mentioned earlier, is courtesy of the ruby layer and handles some of the under the hood complexity, allowing us to just run the commands we need, in this case migrating the database and compiling assets, steps familiar to anyone who has used any modern web framework. Unfortunately, the ruby layer doesn’t have an ownership concept the way the git-deploy layer does, so we need to make sure nothing ends up owned by root that would cause problems later. Then we set some states to trigger configuration of the app servers.

So now we have code and we have a database; we just need to configure and start up the application server. Discourse seems to support a wide variety of Rails application servers; I’ve seen people using Apache with Passenger, Thin, etc. The default in the Docker deploy is Unicorn though, so we’ll go with that.

@when('discourse.configure.unicorn')
def configure_unicorn():
    if not os.path.isdir('/srv/discourse/current/tmp/pids'):
        os.makedirs('/srv/discourse/current/tmp/pids')
    render(source="unicorn.service",
           target="/lib/systemd/system/unicorn.service",
           perms=0o644,
           context={'config': hookenv.config})
    service('enable', 'unicorn')
    service('restart', 'unicorn')
    open_port(3000, protocol='tcp')
    hookenv.status_set('active', 'Discourse running.')
    remove_state('discourse.configure.unicorn')

When the state is set at the end of prepare_codebase, we’ll trigger this reaction. We create a PID directory for the Unicorn server, then render out a templated systemd configuration, enable the server, then ensure that it’s started or restarted, since this reaction can be triggered not only at install time but also when the configuration changes. We don’t really need to open the port, that tends to be something that’s only necessary with public facing services, but it’ll facilitate testing and configuration of Discourse via the web UI when people install the charm, so it’s convenient. We set the juju workload status to active with a relevant comment, then remove the state that triggered the reaction.
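
The unicorn.service template itself isn’t shown here either; a plausible sketch of the rendered unit, assuming Discourse lives in /srv/discourse/current and runs as www-data (both consistent with the chown earlier), might be:

[Unit]
Description=Discourse unicorn application server
After=network.target

[Service]
User=www-data
Group=www-data
WorkingDirectory=/srv/discourse/current
Environment=RAILS_ENV=production
ExecStart=/usr/local/bin/bundle exec unicorn -p 3000 -c config/unicorn.conf.rb
Restart=on-failure

[Install]
WantedBy=multi-user.target

The bundle path and unicorn arguments are illustrative; the real template lives in the charm repository.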

Discourse also has an asynchronous application server that handles operations that shouldn’t block a web request like sending mail, occasional recompilation of markdown of posts, etc. There doesn’t appear to be the diversity of options for that server that exists for the main application server, Sidekiq seems to be the standard, so we do basically the same process for that.
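
Sketched out, the Sidekiq reaction would mirror configure_unicorn almost line for line (an approximation rather than the charm’s actual code):

@when('discourse.configure.sidekiq')
def configure_sidekiq():
    # Same pattern as unicorn: render a unit file, enable it, then
    # restart so configuration changes get picked up
    render(source="sidekiq.service",
           target="/lib/systemd/system/sidekiq.service",
           perms=0o644,
           context={'config': hookenv.config})
    service('enable', 'sidekiq')
    service('restart', 'sidekiq')
    remove_state('discourse.configure.sidekiq')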

Now we have a running application server and if we visit port 3000 of the IP of the unit, we’ll get the Discourse welcome page, but that’s not a convenient way to access a forum or even particularly convenient for basic setup. Luckily, the website layer exists which lets a charm provide the “website” relation that can be consumed by most frontend servers. In this case I’ll be using Apache, but it could just as easily be Nginx, Squid, Varnish, whatever your preferred frontend web server might be.

@when('website.available')
def configure_website(website):
    hostname = hookenv.config('hostname')
    website.configure(port=3000, hostname=hostname)

That’s a remarkably concise reaction that has a lot abstracted away for us. The website.available state is set by the website interface layer when a relation is created between the Discourse charm and any other charm that can consume the website relation, so it doesn’t get executed until there’s a unit to consume it. When that happens we just provide the port Unicorn is listening on and the hostname configured in Discourse’s juju configuration, so the web server knows what its FQDN should be and we’re done; the layer handles everything else.

So now we have a running Discourse that we can access via the web server of our choice, but what about when we want to update the Discourse code? Or suppose we need to change the hostname or install a new plugin? How do we handle those situations?

@when('config.changed')
def write_new_config():
    config = ingest_config()
    config['hostname'] = hookenv.config('hostname')
    admin_users = hookenv.config('admin-users')
    if admin_users:
        config['developer_emails'] = admin_users
    write_config(config)
    # This will force the prepare_codebase reaction to run which will handle
    # any changes in the plugin list and then will in turn run the
    # configure_unicorn and configure_sidekiq reactions
    remove_state('discourse.codebase.prepared')

The config.changed state is set by charms.reactive whenever a juju configuration value changes. Since the Discourse config is just key/value pairs, it’s safe to write the various configuration settings to discourse.conf whenever any of them change, so we don’t have to worry about determining which of the configuration options changed at any given time, though charms.reactive does include a facility to do that. That allows you to restart a service only when a configuration option that requires a restart to take effect is changed, for example, something important in more complex applications. In this case the simple route is fine: we write out the new config with all values, then trigger the prepare_codebase reaction again. If a new version of the code has been fetched or the plugin list has changed, that will run through all the steps necessary to update the application and restart the services to pick up the new code/plugin.
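
That per-option facility is exposed as states of the form config.changed.<option>, so a charm that only wanted to react when a restart-worthy option changes could do so more narrowly, along these lines (a sketch for illustration, not part of the Discourse charm):

@when('config.changed.hostname')
def hostname_changed():
    # Fires only when the 'hostname' option itself has changed
    config = ingest_config()
    config['hostname'] = hookenv.config('hostname')
    write_config(config)
    # Re-trigger the app server configuration to pick up the change
    set_state('discourse.configure.unicorn')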

This isn’t the entirety of the charm, obviously; there are some additional interfaces and layers involved to provide monitoring of the various application servers and the required supporting services on the unit, redis and postfix in particular, as well as other bits and pieces of utility code that aren’t particularly illuminating in and of themselves. This does show the meat of the code that actually installs, configures and runs Discourse, though.

So with all the pieces in place, we can go ahead and deploy a full stack:

juju deploy discourse
juju deploy postgresql
juju deploy apache2
juju add-relation discourse:db-admin postgresql:db-admin
juju add-relation discourse:website apache2:balancer

There is a tiny bit of additional configuration to be done: Discourse needs a hostname and admin users, and Apache needs vhost templates to configure its virtual hosts to use the balancer created by the website relation to serve the site. But the user experience of deploying and configuring Discourse via the charm is very, very simple.
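
That remaining configuration is a couple of juju config calls; something like the following, where the hostname is a placeholder and vhost_http_template is, to the best of my recollection, the apache2 charm’s option for a base64-encoded vhost template (check the charm’s config listing before relying on it):

juju config discourse hostname=discourse.example.com
juju config apache2 vhost_http_template="$(base64 -w0 vhost.tmpl)"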

Hopefully this provides a view into the power the reactive framework gives charm authors to focus on the details of getting their application running with a minimum of boilerplate code and complexity. Overall, the Discourse charm ended up taking me a few days to develop, including some time setting up the manual test install, tracking down bugs and testing, then finally deploying into production. I was able to go from a very basic understanding of Rails applications in general and Discourse in particular to a deployed service leveraging other charms within a few days of work. Now anyone else who may want to provide a Discourse forum for their community has a robust method for deploying and scaling it with very little effort.

Categories: Linux

Ubuntu Insights: LXD weekly status #36

Planet Ubuntu - Mon, 02/26/2018 - 14:36


This past week we’ve been working very hard to land all those last few bits ahead of us tagging a number of 3.0.0.beta1 releases of all our repositories.

We’re now waiting for a few last bits to land, including LXD clustering and some reshuffling of templates, bindings and tools in LXC. The current plan is to start tagging a number of projects later today, tomorrow and Wednesday, with all of them making their way into Ubuntu by end of day on Thursday.

Note that all of those will be beta releases and so will not see our usual backporting effort at this point, nor get full release announcements; we’ll keep all that for the final 3.0 release in about a month’s time.

For snap users, we expect to push all of this to the currently unused beta channel, allowing you to try the upcoming LXD 3.0 along with the matching LXC 3.0 and LXCFS 3.0.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

  • lxd 2.21-0ubuntu4 was uploaded to cleanup some old binary packages.
  • Cherry-picked a large number of bugfixes.
  • Added a note to lxd.migrate on migrating the client configuration too.
Categories: Linux

Jo Shields: EOL notification – Debian 7, Ubuntu 12.04

Planet Ubuntu - Mon, 02/26/2018 - 11:31

Mono packages will no longer be built for these ancient distribution releases, starting from when we add Ubuntu 18.04 to the build matrix (likely early to mid April 2018).

Unless someone with a fat wallet screams, and throws a bunch of money at Azure, anyway.

Categories: Linux

Ubuntu Insights: Deploying Ubuntu OpenStack to ARM64 servers

Planet Ubuntu - Mon, 02/26/2018 - 10:15

This article originally appeared on Dann Frazier's blog


At Canonical, we’ve been doing work to make sure Ubuntu OpenStack deploys on ARM servers as easily as on x86. Whether you have Qualcomm 2400 REP boards, Cavium ThunderX boards, HiSilicon D05 boards, or other Ubuntu Certified server hardware, you can go from bare metal to a working OpenStack in minutes!

The following tutorial will walk you through building a simple Ubuntu OpenStack setup, highlighting any ARM-specific caveats along the way.

Note: very little here is actually ARM specific – you could just as easily follow this to setup an x86 OpenStack.

Juju and MAAS

Ubuntu OpenStack is deployed using MAAS and Juju. If you’re unfamiliar with these tools, let me give you a quick overview.

MAAS is a service that manages clusters of bare-metal servers in a manner similar to cloud instances. Using the web interface, or its API, you can ask MAAS to power on one or more servers and deploy an OS to them, ready for login. In this tutorial, we’ll be adding your ARM servers to a MAAS cluster, so that Juju can deploy and manage them via the MAAS API.
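
The same operations are scriptable via the MAAS CLI. As a rough sketch (the profile name “admin” and the placeholders are assumptions; the API key comes from the MAAS web UI), allocating and deploying a node from the command line looks something like:

maas login admin http://<MAAS Server IP addr>/MAAS/api/2.0/ <api-key>
maas admin machines allocate
maas admin machine deploy <system_id>

Juju drives this same API for you, so the CLI isn’t needed for this tutorial, but it can be handy for debugging.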

Juju is a workload orchestration tool. It takes definitions of workloads, called bundles, and realizes them in a given cloud environment. In this case, we’ll be deploying Ubuntu’s openstack-base bundle to your MAAS cloud environment.

Hardware Requirements

A minimal Ubuntu OpenStack setup on ARM comprises:

  • 5 ARM server nodes for your MAAS cluster. 4 of these will be used to run OpenStack services, and the 5th will operate a Juju controller that manages the deployment.
    • Each system needs to have 2 disks (the second is for ceph storage).
    • Each system needs to have 2 network adapters. To keep this simple, it’s best if the network adapters are identically configured (same NICs, and if plug-in NICs are used, same slots).
    • Each node should be configured to PXE boot by default. If you have one of these systems, check out the “MAAS Notes” section on the Ubuntu wiki for tips:
  • 1 server to run the MAAS server
    • CPU architecture doesn’t matter.
    • Install this server with Ubuntu Server 16.04. A clean “Basic” installation is recommended.
    • >= 10GB of free disk space.
    • >= 2GB of RAM
  • 1 client system for you to use to execute juju and openstack client commands to initiate, monitor and test out the deployment.
    • Make sure this is a system that can run a web browser, so you can use it to view the Juju, MAAS and OpenStack GUIs.
    • Ubuntu 16.04 is recommended (that’s what we tested with).
Network Layout

Again for simplicity, this tutorial will assume that everything (both NICs of each ARM server, the ARM server BMCs, the MAAS server, the client system and your OpenStack floating IPs) is on the same flat network (you’ll want more segregation in a production deployment). Cabling should look like the following figure.

We’re using a single subnet throughout this tutorial. MAAS will provide a DHCP server for this subnet, so be sure to deactivate any other DHCP servers to avoid interference.

Network Planning

Since all of your IPs will be sharing a single subnet, you should prepare a plan in advance for how you want to split up the IPs to avoid accidental overlap. For example, with our network, you might allocate:

  • – Gateway
  • – Static IPs (MAAS Server, client system, ARM Server BMCs, etc.)
  • MAAS node IP pool (IPs MAAS is allowed to assign to your ARM Server nodes).
  • OpenStack floating IP pool (for your OpenStack instances)
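
If you want MAAS to enforce that plan, one option is to reserve the static portion so MAAS never hands those addresses out. A sketch using the MAAS CLI (the profile name and IPs are placeholders for whatever your plan uses):

maas admin ipranges create type=reserved start_ip=<first static IP> end_ip=<last static IP> comment='Gateway, static IPs, BMCs'
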
OK. Let’s get to it.

MAAS Server Installation

On the MAAS Server, run the following command sequence to install the latest version of MAAS:

sudo apt-add-repository ppa:maas/stable -y
sudo apt update
sudo apt install maas -y

Once MAAS is installed, run the following command to setup admin username and password:

ubuntu@maas:~$ sudo maas createadmin
Username: ubuntu
Password:
Again:
Email:
Import SSH keys [] (lp:user-id or gh:user-id): lp:<lpuserid>

Using a web browser from the client system, connect to the MAAS web interface. It is at http://<MAAS Server IP addr>/MAAS :

Log in with the admin credentials you just created. Select arm64 in the Architectures field of the image sources and click “Update Selection”. Wait for the images to download and sync. After all images are synced, click “Continue”.

Import one or more ssh keys. You can paste them in, or easily import from Launchpad or GitHub:

After basic setup, go to the “Subnets” tab and click the subnet address:

Provide the correct “Gateway IP” and “DNS” address for your subnet:

Next, go to the “Subnets” tab and click the untagged VLAN:

Select “Provide dhcp” in the “Take action” pulldown:

Many ARM servers (other than X-Gene/X-Gene 2 systems and Cavium ThunderX CRBs) require the 16.04 HWE kernel, so we need to configure MAAS to use it by default. Go to the “Settings” tab and select “xenial (hwe-16.04)” as the Default Minimum Kernel Version for Commissioning, then click “Save”:

Enlisting and Commissioning Nodes

In order for MAAS to manage your ARM Server nodes, they need to be first enlisted into MAAS, then commissioned. To do so, power on the node, and allow it to PXE boot from the MAAS server. This should cause the node to appear with a randomly generated name on the “Nodes” page:

Click on the Node name, and select “Commission” in the “Take action” menu. This will begin a system inventory process after which the node’s status will become “Ready”. Repeat for all other nodes.

Testing out MAAS

Before we deploy OpenStack, it’d be good to first demonstrate that your MAAS cluster is functioning properly. From the “Nodes” page in the MAAS UI, select a node and choose the “Deploy” action in the “Take action” pulldown:

When status becomes “Deployed”, you can ssh into the node with username “ubuntu” and your ssh private key. You can find a node’s IP address by clicking the node’s hostname and looking at the Interfaces tab:

Now ssh to that node with username “ubuntu” and the ssh key you configured in MAAS earlier:

All good? OK – release the node back to the cluster via the MAAS UI, and let’s move onto deploying OpenStack!

Deploying OpenStack

Download the bundle .zip file from to your client system and extract it:

ubuntu@jujuclient$ sudo apt install unzip -y
ubuntu@jujuclient$ unzip

The following files will be extracted:

  • bundle.yaml: This file defines the modeling and placement of services across your OpenStack cluster
  • neutron-ext-net, neutron-tenant-net: scripts to help configure your OpenStack networks
  • novarc: script to setup your environment to use OpenStack

Next, install the Juju client from the snap store to your client system:

ubuntu@jujuclient$ sudo snap install juju --classic

Then, configure Juju to use your MAAS environment, as described here. After configuring Juju to use your MAAS cluster, run the following command on the Juju client system to instantiate a Juju controller node:

ubuntu@jujuclient$ juju bootstrap maas-cloud maas \
    --bootstrap-constraints arch=arm64

Where “maas-cloud” is the cloud name you assigned in the “Configure Juju” step. Juju will auto-select a node from the MAAS cluster to be the Juju controller and deploy it. You can monitor this progress via the MAAS web interface and the console of the bootstrap node. Now, deploy the OpenStack bundle:

  • Locate the bundle.yaml file from the archive.
  • Open the bundle.yaml file in a text editor, and locate the data-port setting for the neutron-gateway service.
  • Change the data-port setting as appropriate for the systems in your MAAS cluster. This should be the name of the connected NIC interface on your systems that is not managed by MAAS (see the diagram in the “Network Layout” section of this post). For example, to use the second built-in interface on a HiSilicon D05 system, replace “br-ex:eno2” with “br-ex:enahisic2i1”. For more information, see the “Port Configuration” section in the neutron-gateway charm docs.
  • Execute:
ubuntu@jujuclient$ juju deploy bundle.yaml

You can monitor the status of your deployment using the juju “status” command:

ubuntu@jujuclient$ juju status

Note: The deployment is complete once juju status reports all units, other than ntp, as “Unit is ready”. (The ntp charm has not yet been updated to report status, so ntp units will not report a “Unit is ready” message.) Note: You can also view a graphical representation of the deployment and its status using the juju gui web interface:

ubuntu@jujuclient$ juju gui
GUI 2.10.2 for model "admin/default" is enabled at:
Your login credential is:
  username: admin
  password: d954cc41130218e590c62075de0851df

Troubleshooting Deployment

If the neutron-gateway charm enters a “failed” state, it may be because you have entered an invalid interface for the data-port config setting. You can change this setting after deploying the bundle using the juju config command, and then ask the unit to retry:

ubuntu@jujuclient$ juju config neutron-gateway data-port=br-ex:<iface>
ubuntu@jujuclient$ juju resolved neutron-gateway/0

If the device name is not consistent between hosts, you can specify the same bridge multiple times with MAC addresses instead of interface names. The charm will loop through the list and configure the first matching interface. To do so, specify a list of MACs using a space delimiter, as seen in the example below:

ubuntu@jujuclient$ juju config neutron-gateway data-port="br-ex:<MAC> br-ex:<MAC> br-ex:<MAC> br-ex:<MAC>"
ubuntu@jujuclient$ juju resolved neutron-gateway/0

Testing it Out

See the “Ensure it’s working” section of the following document to complete a sample configuration and launch a test instance: (^ This is a fork of the main charm docs with some corrections pending merge.)

Finally, you can access the OpenStack web interface at: http://<ip of openstack-dashboard>/horizon

To obtain the openstack-dashboard IP address, run:

ubuntu@jujuclient$ juju run --unit openstack-dashboard/0 'unit-get public-address'

Log in as user ‘admin’ with password ‘openstack’.

Many thanks to Sean Feole for helping draft this guide, and to Michael Reed & Ike Pan for testing it out.

Categories: Linux

Andrea Veri: Adding reCAPTCHA v2 support to Mailman

Planet Ubuntu - Mon, 02/26/2018 - 08:13

As a follow-up to the reCAPTCHA v1 post published back in 2014, here comes an updated version for migrating your Mailman instance from version 1 (being decommissioned on the 31st of March 2018) to version 2. The original python-recaptcha library was forked and made compatible with reCAPTCHA version 2.

The relevant changes against the original library can be summarized as follows:

  1. Added ‘version=2’ support to the displayhtml and load_script functions
  2. Introduced the v2submit function (alongside submit, which is kept for backwards compatibility) to support reCAPTCHA v2
  3. The updated library is backwards compatible with version 1 to avoid unexpected code breakages for instances still running version 1
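
For reference, end-to-end usage of the forked library matches the diffs below; in isolation it looks roughly like this (the keys, response field and remote IP are placeholders):

from recaptcha.client import captcha

# Rendering: the v2 widget plus the script tag that loads the reCAPTCHA API
widget_html = captcha.displayhtml(RECAPTCHA_PUBLIC_KEY, use_ssl=True, version=2)
script_html = captcha.load_script(version=2)

# Verification: check the g-recaptcha-response POST field server-side
response = captcha.v2submit(g_recaptcha_response, RECAPTCHA_PRIVATE_KEY, remote_ip)
if not response.is_valid:
    print 'Invalid captcha: %s' % response.error_code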

The required changes are located on the following files:


--- listinfo.py 2018-02-26 14:56:48.000000000 +0000
+++ /usr/lib/mailman/Mailman/Cgi/listinfo.py 2018-02-26 14:08:34.000000000 +0000
@@ -31,6 +31,7 @@
 from Mailman import i18n
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+from recaptcha.client import captcha

 # Set up i18n
 _ = i18n._
@@ -244,6 +245,10 @@
     replacements['<mm-subscribe-form-start>'] = mlist.FormatFormStart('listinfo')
     replacements['<mm-fullname-box>'] = mlist.FormatBox('fullname', size=30)

+    # Captcha
+    replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=True, version=2)
+    replacements['<mm-recaptcha-script>'] = captcha.load_script(version=2)
+
     # Do the expansion.
     doc.AddItem(mlist.ParseTags('listinfo.html', replacements, lang))
     print doc.Format()


--- subscribe.py 2018-02-26 14:56:38.000000000 +0000
+++ /usr/lib/mailman/Mailman/Cgi/subscribe.py 2018-02-26 14:08:18.000000000 +0000
@@ -32,6 +32,7 @@
 from Mailman.UserDesc import UserDesc
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+from recaptcha.client import captcha

 SLASH = '/'
 ERRORSEP = '\n\n<p>'
@@ -165,6 +166,17 @@
         results.append(
             _('There was no hidden token in your submission or it was corrupted.'))
         results.append(_('You must GET the form before submitting it.'))
+
+    # recaptcha
+    captcha_response = captcha.v2submit(
+        cgidata.getvalue('g-recaptcha-response', ""),
+        mm_cfg.RECAPTCHA_PRIVATE_KEY,
+        remote,
+    )
+
+    if not captcha_response.is_valid:
+        results.append(_('Invalid captcha: %s' % captcha_response.error_code))
+
     # Was an attempt made to subscribe the list to itself?
     if email == mlist.GetListEmail():
         syslog('mischief', 'Attempt to self subscribe %s: %s', email, remote)


--- listinfo.html 2018-02-26 15:02:34.000000000 +0000
+++ /usr/lib/mailman/templates/en/listinfo.html 2018-02-26 14:18:52.000000000 +0000
@@ -3,7 +3,7 @@
 <HTML>
 <HEAD>
 <TITLE><MM-List-Name> Info Page</TITLE>
-
+ <MM-Recaptcha-Script>
 </HEAD>
 <BODY BGCOLOR="#ffffff">
@@ -116,6 +116,11 @@
 </tr>
 <mm-digest-question-end>
 <tr>
+ <tr>
+ <td>Please fill out the following captcha</td>
+ <td><mm-recaptcha-javascript></TD>
+ </tr>
+ <tr>
 <td colspan="3">
 <center><MM-Subscribe-Button></center>
 </td>

The updated RPMs are being rolled out to Fedora, EPEL 6 and EPEL 7. In the meantime you can find them here.

If Mailman complains about not being able to load recaptcha.client follow these steps:

cd /usr/lib/mailman/pythonlib
ln -s /usr/lib/python2.6/site-packages/recaptcha/client recaptcha

And then in {subscribe,listinfo}.py:

import recaptcha

Categories: Linux

Ubuntu Studio: Introducing the potential new Ubuntu Studio Council

Planet Ubuntu - Sat, 02/24/2018 - 09:09
Back in 2016, Set Hallström was elected as the new Team Lead for Ubuntu Studio, just in time for the 16.04 Xenial Long Term Support (LTS) release. It was intended that Ubuntu Studio would be able to utilise Set’s leadership skills at least up until the next LTS release in April 2018. Unfortunately, as happens […]
Categories: Linux

Simon Raffeiner: Open Source Color Management is broken

Planet Ubuntu - Sat, 02/24/2018 - 07:45
Since I am now in the business of photography and image processing (see my travel photography blog here), I thought
Categories: Linux

The Fridge: Xenial 16.04.4 Call For Testing (All Flavours)

Planet Ubuntu - Sat, 02/24/2018 - 01:59
Some time ago our first release candidate builds for all flavours that released with xenial were posted to the ISO tracker [1] under the 16.04.4 milestone.

As with each point release, we need volunteers to grab the ISOs of their flavour(s) of choice and perform general testing. We are mostly looking for regressions from 16.04.3, but please file any bugs you encounter (against the respective source packages on Launchpad).

There is still time until the target release date on the 1st of March, but for now we're not considering pulling in any more fixes besides ones for potential release-blockers that we encounter. With enough luck, the images that have been made available just now might be the ones we release on Thursday.

Thank you!

[1]

Originally posted to the Ubuntu Release mailing list on Fri Feb 23 22:33:06 UTC 2018 by Lukasz Zemczak, on behalf of the Ubuntu Release Team
Categories: Linux

Benjamin Mako Hill: “Stop Mang Fun of Me”

Planet Ubuntu - Fri, 02/23/2018 - 12:45

Somebody recently asked me if I am the star of quote #75514 (a snippet of online chat from a large collaboratively built collection):

<mako> my letter "eye" stopped worng
<luca> k, too?
<mako> yeah
<luca> sounds like a mountain dew spill
<mako> and comma
<mako> those three
<mako> ths s horrble
<luca> tme for a new eyboard
<luca> 've successfully taen my eyboard apart and fxed t by cleanng t wth alcohol
<mako> stop mang fun of me
<mako> ths s a laptop!!

It was me. A circuit on my laptop had just blown out my I, K, ,, and 8 keys. At the time I didn’t think it was very funny.

I had no idea anyone had saved a log, and I had forgotten about the experience until I saw the quote. I appreciate it now, so I’m glad somebody did!

This was unrelated to the time that I poured water into two computers in front of 1,500 people and the time that I carefully placed my laptop into a full bucket of water.

Categories: Linux

Ubuntu Insights: Ubuntu Desktop weekly update – February 23, 2018

Planet Ubuntu - Fri, 02/23/2018 - 11:31

  • We’ve been working on a GNOME Online Accounts plugin for Ubuntu One. This will allow you to manage your U1 credentials and share them with services which need them, for example Canonical LivePatch. An MR is being proposed upstream.
  • A bug has been fixed which caused some high contrast icons to be missing in System Settings.
  • A fix for centering the date alignment in GNOME Shell has been merged upstream. And we’ve got an upstream review for the work to allow the volume to be amplified above 100%.
  • There was a regression in the video acceleration work which caused corruption. This has been bisected and now fixed upstream.
  • GNOME Shell performance with two monitors has had some more work done and should be fixed soon.
  • Alexander Larsson and James Henstridge have been working on bringing snap support to portals. This work is happening upstream and making good progress. You can read more here.
    The associated work in snapd is here.
  • In our team meeting this week we decided not to ship tracker by default in 18.04 but will look to enable it in 18.10. You can read more about what Tracker is and does here.
  • GNOME Software in Bionic has support for snap channels now.
  • The daily Bionic ISOs now feature a seeded snap (GNOME Calculator) in place of the deb. This is an early test to help us iron out the bugs before we look to move more applications to snaps in the final release.
  • Network Manager 1.10 is now in Bionic proposed. Initial testing looks good, but if you find any problems please log a bug.
  • We’ve refreshed the patches to enable hardware accelerated video playback in Chromium. We will distro patch this until such time as it lands upstream.
  • We’ve also been doing work to better support the onscreen keyboard in Chromium under GNOME Shell, and you can test that work here.
  • We’ve landed some improvements to BAMF to better match snap applications under Unity 7.
  • BlueZ 5.48 has landed in Bionic.
  • Support for themes in snaps is progressing well. You can read more about that work here.
  • Chromium
    • Updated stable to 64.0.3282.167
    • Updated beta to 65.0.3325.73
  • LibreOffice is available in the stable channel now. And 14.04, 16.04 and 17.10 all have a number of security updates.
  • A stack of GNOME 3.27 updates have landed in Bionic. This includes things like gnome-keyring, gnome-desktop, gjs, evince, devhelp, dconf-editor, gnome-online-accounts, gvfs, orca screen reader and some games. This is all in preparation for the move to GNOME 3.28 for release.
  • Webkitgtk 2.19 has also landed.

As always, you can comment or discuss any of these changes on the Community Hub.


Categories: Linux

Jo Shields: Update on MonoDevelop Linux releases

Planet Ubuntu - Fri, 02/23/2018 - 07:19

Once upon a time, the Mono project had two package repositories – one for RPM files, one for Deb files. This, as it turned out, was untenable – just building on an old distribution was insufficient to offer “works on everything” packages, due to dependent library APIs not being necessarily forward-compatible. For example, openSUSE users could not install MonoDevelop, because the versions of libgcrypt, libssl, and libcurl on their systems were simply incompatible with those on CentOS 7. MonoDevelop packages were essentially abandoned as unmaintainable.

Then, nearly 2 years ago, a reprieve – a trend towards development of cross-distribution packaging systems made it viable to offer MonoDevelop in a form which did not care about openSUSE or CentOS or Ubuntu or Debian having incompatible libraries. A release was made using Flatpak (born xdg-app). And whilst this solved a host of distribution problems, it introduced new usability problems. Flatpak means sandboxing, and without explicit support for sandbox escape at the appropriate moment, users would be faced with a different experience than the one they expected (e.g. not being able to P/Invoke libraries in /usr/lib, as the sandbox’s /usr/lib is different).

In 2 years of on-off development (mostly off – I have a lot of responsibilities and this was low priority), I wasn’t able to add enough sandbox awareness to the core of MonoDevelop to make the experience inside the sandbox feel as natural as the experience outside it. The only community contribution to make the process easier was this pull request against DBus#, which helped me make a series of improvements, but not at a sufficient rate to make a “fully Sandbox-capable” version any time soon.

In the interim between giving up on MonoDevelop packages and now, I built infrastructure within our CI system for building and publishing packages targeting multiple distributions (not the multi-distribution packages of yesteryear). And so to today, when recent MonoDevelop .debs and .rpms are or will imminently be available in our Preview repositories. Yes it’s fully installed in /usr, no sandboxing. You can run it as root if that’s your deal.

Where’s the ARM builds?

Where’s the ARM64 builds?

Why aren’t you offering builds for $DISTRIBUTION?

It’s already an inordinate amount of work to support the 10(!) distributions I already do. Especially when, due to an SSL state engine bug in all versions of Mono prior to 5.12, nuget restore in the MonoDevelop project fails about 40% of the time. With 12 (currently) builds running concurrently, the likelihood of a successful publication of a known-good release is about 0.2%. I’m on build attempt 34 since my last packaging fix, at time of writing.

Can this go into my distribution now?

Oh God no. make dist should generate tarballs which at least work now, but they’re very much not distribution-quality. See here.

What about Xamarin Studio/Visual Studio for Mac for Linux?

Probably dead, for now. Not that it ever existed, of course. *cough*. But if it did exist, a major point of concern for making something capital-S-Supportable (VS Enterprise is about six thousand dollars) is being able to offer a trustworthy, integration-tested product. There are hundreds of lines of patches applied to “the stack” in Mac releases of Visual Studio for Mac, Xamarin.Whatever, and Mono. Hundreds to Gtk+2 alone. How can we charge people money for a product which might glitch randomly because the version of Gtk+2 in the user’s distribution behaves weirdly in some circumstances? If we can’t control the stack, we can’t integration test, and if we can’t integration test, we can’t make a capital-P Product. The frustrating part of it all is that the usability issues of MonoDevelop in a sandbox don’t apply to the project types used by Xamarin Studio/VSfM developers. Android development end-to-end works fine. Better than Mac/Windows in some cases, in fact (e.g. virtualization on AMD processors). But because making Gtk#2 apps sucks in MonoDevelop, users aren’t interested. And without community buy-in on MonoDevelop, there’s just no scope for making MonoDevelop-plus-proprietary-bits.

Why does the web stuff not work?

WebkitGtk dropped support for Gtk+2 years ago. It worked in Flatpak MonoDevelop because we built an old WebkitGtk, for use by widgets.

Aren’t distributions talking about getting rid of Gtk+2?


Categories: Linux

Simos Xenitellis: Checking the Ubuntu Linux kernel updates on Spectre and Meltdown (22 February 2018)

Planet Ubuntu - Thu, 02/22/2018 - 08:04
In the post Checking the Ubuntu Linux kernel updates on Spectre and Meltdown we saw the initial support of countermeasures in the Ubuntu Linux kernel for Spectre and Meltdown. Here is the output of the spectre-meltdown-checker script when I run it on 26th January 2018 (Ubuntu Linux kernel HWE, Today there was a kernel update …

Continue reading

Categories: Linux

