Linux Plumbers Conference 2019

Europe/Lisbon

Description

September 9-11, Lisbon, Portugal

The Linux Plumbers Conference is the premier event for developers working at all levels of the plumbing layer and beyond.  LPC 2019 will be held September 9-11 in Lisbon, Portugal.  We are looking forward to seeing you there!

    • 10:00 - 13:30
      Distribution Kernels MC

      The upstream kernel community is where active kernel development happens, but the majority of deployed kernels do not come directly from upstream; they come from distributions. "Distribution" here can refer to a traditional Linux distribution such as Debian or Gentoo, but also to Android or a custom cloud distribution. The goal of this microconference is to discuss common problems that arise when trying to maintain a kernel.

      Expected topics
      Backporting kernel patches and how to make it easier
      Consuming the stable kernel trees
      Automated testing for distributions
      Managing ABIs
      Distribution packaging/infrastructure
      Cross distribution bug reporting and tracking
      Common distribution kconfig
      Distribution default settings
      Which patch sets are distributions carrying?
      More to be added based on CfP for this microconference

      "Distribution kernel" is used in a very broad manner. If you maintain a kernel tree for use by others, we welcome you to come and share your experiences.

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC lead
      Laura Abbott labbott@redhat.com

    • 10:00 - 19:30
      Kernel Summit Track
      • 11:30
        Break 30m
      • 13:30
        Lunch 1h 30m
      • 16:30
        Break 30m
      • 18:30
        TAB Elections 1h
    • 10:00 - 18:30
      LPC Refereed Track
      • 10:00
        Maintaining out of tree patches over the long term 45m

        The PREEMPT_RT patchset is the longest existing large patchset living outside the Linux kernel. Over the years, the realtime developers had to maintain several stable kernel versions of the patchset. This talk will present the lessons learned from this experience, including workflow, tooling and release management that have proven to scale over time. The workflow deals with upstream changes and changes to the patchset itself. Now that the PREEMPT_RT patchset is about to be merged upstream, we want to share our toolset and methods with others who may be able to benefit from our experience.

        This talk is for people who want to maintain an external patchset with stable releases.

      • 10:45
        Core Scheduling: Taming Hyper-Threads to be secure 45m

        Over the last couple of years, we have witnessed an onslaught of vulnerabilities in the design and architecture of CPUs. It is interesting and surprising to note that these vulnerabilities mainly target the features designed to improve the performance of CPUs - most notably hyper-threading (SMT). While some of the vulnerabilities could be mitigated in software and CPU microcode, a couple of others had no satisfactory mitigation other than making sure that SMT is off and that every context switch flushes the cache to clear the data used by the task being switched out. Turning SMT off is not a viable alternative in many production scenarios, such as cloud environments, where a considerable amount of computing power is lost by turning off SMT. To address this, there have been community efforts to keep SMT on while making sure that non-trusting applications are never run concurrently on the hyper-threads of a core; these efforts have become widely known as core scheduling.

        This talk is about the development, testing and profiling efforts around core scheduling in the community. There have been multiple proofs of concept which, while differing in design, ultimately try to make sure that only mutually trusted applications run concurrently on a core. We discuss the design, implementation and performance of the POCs. We also discuss the profiling attempts to understand the correctness and performance of the patches - the powerful kernel features we leveraged to get the most time-sensitive data out of the kernel and understand the behavior of the scheduler with the core scheduling feature. We plan to conclude with a brief discussion of the future directions of core scheduling.

        The core idea of core scheduling is to keep SMT on and make sure that only trusted applications run concurrently on the siblings of a core. If no group of mutually trusting applications is runnable on the core, the remaining siblings must idle while applications run in isolation on the core. This should also consider the performance aspects of the system. Theoretically it is impossible to reach the same level of performance as when cores are allowed to run any runnable application. But if the performance of core scheduling is worse than or the same as the SMT-off case, we do not gain anything from this feature other than added complexity in the scheduler. So the idea is to achieve a considerable boost in performance compared to SMT-off for the majority of production workloads.

        The security boundary is another aspect of critical importance in core scheduling. What should be considered a trust boundary? Should it be at the user/group level, process level or thread level? Should the kernel be considered trusted by applications, or vice versa? With virtualization and nested virtualization in the picture, this gets even more complicated. The answers to most of these questions are environment and workload dependent, and hence these are implemented as policies rather than hardcoded. And then the question arises - how should the policies be implemented? The kernel has a variety of mechanisms to implement this kind of policy, and the proofs of concept posted upstream mainly use cgroups. This talk also discusses other viable options for implementing the policies.
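
        As a concrete illustration of the cgroup-based policy direction, the sketch below tags a cgroup from userspace so that its tasks are only co-scheduled with one another on SMT siblings. The "cpu.tag" file name follows one of the posted proof-of-concept series and is an assumption here; the final interface may well differ.

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /*
         * Tag a cgroup as a core scheduling trust domain.  The cpu.tag file
         * is the interface used by some of the posted POC patches and may
         * change (or disappear) before anything is merged.
         */
        static int tag_cgroup(const char *cgroup_path)
        {
                char path[256];
                int fd;

                snprintf(path, sizeof(path), "%s/cpu.tag", cgroup_path);
                fd = open(path, O_WRONLY);
                if (fd < 0)
                        return -1;
                if (write(fd, "1", 1) != 1) {
                        close(fd);
                        return -1;
                }
                close(fd);
                return 0;
        }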

      • 11:30
        Break 30m
      • 12:00
        Scaling performance profiling infrastructure for data centers 45m

        Understanding application performance and utilization characteristics is critically important for cloud-based computing infrastructure. Minor improvements in predictability and performance of tasks can result in large savings. Google runs all workloads inside containers and, as such, cgroup performance monitoring is heavily utilized for profiling. We rely on two approaches built on the Linux performance monitoring infrastructure to provide task, machine, and fleet performance views and trends. A sampling approach collects metrics across the machine and tries to attribute them back to cgroups, while a counting approach tracks when a cgroup is scheduled and maintains state per cgroup. There are a number of trade-offs associated with both approaches. We will present an overview and associated use-cases for both approaches at Google.
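
        To make the "counting" approach concrete, here is a minimal sketch using perf_event_open() in cgroup mode; it assumes a cgroup-v1 perf_event hierarchy mounted at /sys/fs/cgroup/perf_event and a made-up container name, and counts CPU cycles for that cgroup on one CPU.

        #include <linux/perf_event.h>
        #include <sys/syscall.h>
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
                struct perf_event_attr attr;
                uint64_t count;
                int cgrp_fd, ev_fd;

                /* the fd of the cgroup directory is passed as the "pid" argument */
                cgrp_fd = open("/sys/fs/cgroup/perf_event/mycontainer", O_RDONLY);
                if (cgrp_fd < 0)
                        return 1;

                memset(&attr, 0, sizeof(attr));
                attr.size = sizeof(attr);
                attr.type = PERF_TYPE_HARDWARE;
                attr.config = PERF_COUNT_HW_CPU_CYCLES;

                /* cgroup mode counts per CPU; a real tool opens one fd per CPU */
                ev_fd = syscall(__NR_perf_event_open, &attr, cgrp_fd, 0, -1,
                                PERF_FLAG_PID_CGROUP);
                if (ev_fd < 0)
                        return 1;

                sleep(1);
                if (read(ev_fd, &count, sizeof(count)) == sizeof(count))
                        printf("cycles: %llu\n", (unsigned long long)count);
                return 0;
        }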

        As servers have gotten bigger, the number of cores and containers on a machine has grown significantly. At this bigger scale, interference is a larger problem for multi-tenant machines and performance profiling becomes even more critical. However, we have hit multiple issues in scaling the underlying Linux performance monitoring infrastructure to provide fresh and accurate data for our fleet. The performance profiling has to deal with the following issues:

        • Interference: To be tolerated by workloads, monitoring overhead
          should be minimal - usually below 2%; some latency-sensitive workloads
          are even less tolerant than that. As we gain more introspection into
          our workloads, we end up having to use more and more events to
          pinpoint certain bottlenecks. That unavoidably incurs event
          multiplexing, as the number of core hardware counters is very limited
          compared to the number of containers profiled and events monitored.
          Adding counters is not free in hardware, and similarly in the kernel,
          as more registers must be saved and restored on context switches,
          which can cause jitter for the applications being profiled.
        • Accuracy: Sampling at the machine level reduces some of the associated costs, but attributing the counters back to containers is lossy and we see a large drop in profiling accuracy. The attribution gets progressively worse as we move to bigger machines with a large number of threads. The attribution errors severely limit the granularity of performance improvements and degradations we can measure in our fleet.
        • Kernel overheads: perf_events event multiplexing is a complex and expensive algorithm that is especially taxing when run in cgroup mode. As implemented, scheduling of cgroup events is bound by the number of cgroup events per CPU and not by the number of counters, unlike regular per-CPU monitoring. To get a consistent view of activity on a server, Google needs to periodically count events per cgroup. Cgroup monitoring is preferred over per-thread monitoring because Google workloads tend to use an extensive number of threads, which would make per-thread monitoring prohibitively expensive. We have explored ways to avoid these scaling issues and make event multiplexing faster.
        • User-space overheads: The bigger the machines, the larger the volume of profiling data generated. Google relies extensively on the perf record tool to collect profiles. There are significant user-space overheads to merge the per-cpu profiles and post-process for attribution. As we look to make perf-record multi-threaded for scalability, data collection and merging becomes yet another challenge.
        • Symbolization overheads: Perf tools rely on /proc/PID/maps to understand process mappings and to symbolize samples. The parsing and scanning of /proc/PID/maps is time-consuming with large overheads. It is also riddled with race conditions as processes are created and destroyed during parsing.

        These are some of the challenges we have encountered while using perf_events and the perf tool at scale. For this infrastructure to remain popular, it needs to adapt quickly to new hardware and data-center realities. We plan to share our findings and optimizations, followed by an open discussion on how to best solve these challenges.

      • 12:45
        printk: Why is it so complicated? 45m

        The printk() function has a long history of issues and has undergone many iterations to improve performance and reliability. Yet it is still not an acceptable solution to reliably allow the kernel to send detailed information to the user. And these problems are even magnified when using a real-time system. So why is printk() so complicated and why are we having such a hard time finding a good solution?

        This talk will briefly cover the history of printk() and why the recent major rework was necessary. It will go through the details of the rework and why we believe it solves many of the issues. And it will present the issues still not solved (such as fully synchronous console writing), why these issues are particularly complex and controversial, and review some of the proposed solutions for moving forward.

        This talk may be of particular interest to developers with experience or interest in lockless ring buffers, memory barriers, and NMI-safe synchronization.

      • 13:30
        Lunch 1h 30m
      • 15:00
        What does remote attestation buy you? 45m

        TPM remote attestation (a mechanism allowing remote sites to ask a computer to prove what software it booted) was an object of fear in the open source community in the 2000s, a potential existential threat to Linux's ability to interact with the free internet. These concerns have largely not been realised, and now there's increasing interest in ways we can use remote attestation to improve security while avoiding privacy concerns or attacks on user freedom.

        More modern uses of remote attestation include simplifying deployment of machines to remote locations, easy recovery of systems with nothing more than a network connection, automatic issuance of machine identity tokens, trust-based access control to sensitive resources and more. We've released a full implementation, so this presentation will discuss how it can be tied in to various layers of the Linux stack in ways that give us new functionality without sacrificing security or freedom.

      • 15:45
        Linux kernel fastboot on the way 45m

        Linux kernel fastboot is critical for all kinds of platforms: from embedded/smartphone to desktop/cloud, and it has been hugely improved over the years. But is it all done? Not yet!

        This topic will first share the optimizations done for our platform, which cut the kernel boot time (inside a VM) from 3000ms to 300ms, and then list future potential optimization points.

        Here are our optimizations:
        1. really enable device drivers' asynchronous probing (e.g. i915) to improve boot parallelization; see the sketch after this list
        2. deferred memory init leveraging the memory hotplug feature
        3. optimize rootfs mounting (including the storage driver and mounting)
        4. kernel modules and config optimization
        5. reduce the hypervisor cost
        6. tools for profiling/analyzing
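
        As a sketch of item 1, a driver can opt in to asynchronous probing through the existing driver-core probe_type field; the "demo" driver below is made up for illustration and is not part of our actual patches:

        #include <linux/module.h>
        #include <linux/platform_device.h>

        static int demo_probe(struct platform_device *pdev)
        {
                /* slow hardware init now runs in parallel with the rest of boot */
                return 0;
        }

        static struct platform_driver demo_driver = {
                .probe = demo_probe,
                .driver = {
                        .name = "demo",
                        /* ask the driver core to probe this driver asynchronously */
                        .probe_type = PROBE_PREFER_ASYNCHRONOUS,
                },
        };
        module_platform_driver(demo_driver);

        MODULE_LICENSE("GPL");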

        Potential optimization spots for the future, which need discussion and collaboration from the whole community:
        1. how to make maximal use of multi-core and effectively distribute boot tasks to each core
        2. smp init for each CPU core costs about 8ms, a big burden for large systems
        3. force highest cpufreq as early as possible (kernel decompress time)
        4. devices enumeration for firmware (like ACPI) set to be parallel
        5. in-kernel deferred memory init (for 4GB+ platform)
        6. user space optimization like systemd

      • 16:30
        Break 30m
      • 17:00
        Red Hat joins CI party, brings cookies 45m

        For the past couple of years the CKI ("cookie") project at Red Hat has been transforming the way the company tests kernels, going from staged testing to continuous integration. We've been testing patches posted to internal mailing lists, responding with our results, and last year we started testing the stable queues maintained by Greg KH, posting results to the "stable" mailing list.

        Now we'd like to expand our efforts to more upstream mailing lists, and join forces with CI systems already out there. We'll introduce you to the way our CI works, which tests we run, our extensive pool of hardware, and how we report results. We'd like to hear what you need from a CI system, and how we can improve. We'd like to invite you to cooperate with us, both long-term, and right there, at a hackfest organized during the conference.

        Naturally, real cookies will make an appearance.

      • 17:45
        To be announced 45m
    • 10:00 - 18:30
      Networking Summit Track
      • 11:30
        Break 30m
      • 13:30
        Lunch 1h 30m
      • 16:30
        Break 30m
    • 10:00 - 13:30
      Toolchains MC

      The goal of the Toolchains Microconference is to focus on specific topics related to the GNU Toolchain and Clang/LLVM that have a direct impact in the development of the Linux kernel.

      The intention is to have a very practical MC, where toolchain and kernel hackers can engage and, together:

      Identify problems, needs and challenges.
      Propose, discuss and agree on solutions for these specific problems.
      Coordinate on how to implement the solutions, in terms of interfaces, patch submissions, etc., in both the kernel and the toolchain components.
      

      Consequently, we will discourage vague and general "presentations" in favor of concreteness and to-the-point discussions, encouraging the participation of everyone present.

      Examples of topics to cover:

      Header harmonization between kernel and glibc.
      Wrapping syscalls in glibc.
      eBPF support in toolchains.
      Potential impact/benefit/detriment of recently developed GCC optimizations on the kernel.
      Kernel hot-patching and GCC.
      Online debugging information: CTF and BTF
      

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Jose E. Marchesi jose.marchesi@oracle.com and Elena Zannoni ezannoni@gmail.com

    • 10:00 - 13:30
      VFIO/IOMMU/PCI MC

      The PCI interconnect specification and the devices implementing it are incorporating more and more features aimed at high-performance systems (e.g. RDMA, peer-to-peer, CCIX, PCI ATS (Address Translation Service)/PRI (Page Request Interface), and Shared Virtual Addressing (SVA) between devices and CPUs). These features require the kernel to coordinate the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for userspace access and device passthrough), with related kernel interfaces that have to be designed in sync for all three subsystems.

      The kernel code that enables these new system features requires coordination between VFIO/IOMMU/PCI subsystems, so that kernel interfaces and userspace APIs can be designed in a clean way.

      Following up on the successful LPC 2017 VFIO/IOMMU/PCI microconference, the Linux Plumbers 2019 VFIO/IOMMU/PCI track will focus on promoting discussion of current kernel patches aimed at the VFIO/IOMMU/PCI subsystems, with specific sessions targeting patches that enable technologies (e.g. device/sub-device assignment, peer-to-peer PCI, IOMMU enhancements) requiring coordination between the three subsystems. The microconference will also cover VFIO/IOMMU/PCI subsystem-specific topics to debate the status of patches for the respective subsystems' plumbing.

      Tentative topics for discussion:

      VFIO
        - Shared Virtual Addressing (SVA) interface
        - SRIOV/PASID integration
        - Device assignment/sub-assignment
      IOMMU
        - IOMMU drivers SVA interface consolidation
        - IOMMUs virtualization
        - IOMMU-API enhancements for mediated devices/SVA
        - Possible IOMMU core changes (like splitting up iommu_ops, better integration with device-driver core)
        - DMA-API layer interactions and how to get towards generic dma-ops for IOMMU drivers
      PCI
        - Resources claiming/assignment consolidation
        - Peer-to-Peer
        - PCI error management
        - PCI endpoint subsystem
        - prefetchable vs non-prefetchable BAR address mappings (cacheability)
        - Kernel NoSnoop TLP attribute handling
        - CCIX and accelerators management

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Bjorn Helgaas bjorn@helgaas.com, Lorenzo Pieralisi lorenzo.pieralisi@arm.com, Joerg Roedel joro@8bytes.org, and Alex Williamson alex.williamson@redhat.com

    • 15:00 - 18:30
      Scheduler MC

      The Linux Plumbers 2019 Scheduler Microconference is about all scheduler topics that are not real-time.

      Potential topics:
      - Load Balancer Rework - prototype
      - Idle Balance optimizations
      - Flattening the group scheduling hierarchy
      - Core scheduling
      - Proxy Execution for CFS
      - Improving scheduling latency with SCHED_IDLE tasks
      - Scheduler tunables - Mobile vs Server
      - nohz
      - LISA for scheduler verification

      We plan to continue the discussions that started at OSPM in May'19 and get a wider audience outside the core scheduler developers at LPC.

      Potential attendees:
      Juri Lelli
      Vincent Guittot
      Subhra Mazumdar
      Daniel Bristot
      Dhaval Giani
      PeterZ
      Paul Turner
      Rik van Riel
      Patrick Bellasi
      Morten Rasmussen
      Dietmar Eggemann
      Steven Rostedt
      Thomas Gleixner
      Viresh Kumar
      Phil Auld
      Waiman Long
      Josef Bacik
      Joel Fernandes
      Paul McKenney
      Alessio Balsini
      Frederic Weisbecker

      This microconference covers scheduler topics that are not RT, but it should take place either immediately before or after the Real Time MC.

      MC leads:
      Juri Lelli juri.lelli@redhat.com, Vincent Guittot vincent.guittot@linaro.org, Daniel Bristot de Oliveira bristot@redhat.com, Subhra Mazumdar subhra.mazumdar@oracle.com, Dhaval Giani dhaval.giani@gmail.com

    • 15:00 - 18:30
      System Boot and Security MC
    • 15:00 - 18:30
      You, Me, and IoT MC

      The Internet of Things (IoT) has been growing at an incredible pace as of late.

      Some IoT application frameworks expose a model-based view of endpoints, such as

      on-off switches
      dimmable switches
      temperature controls
      door and window sensors
      metering
      cameras
      

      Other IoT application frameworks provide direct device access, by creating real and virtual device pairs that communicate over the network. In those cases, writing to the virtual /dev node on a client affects the real /dev node on the server. Examples are

      GPIO (/dev/gpiochipN)
      I2C (/dev/i2cN)
      SPI (/dev/spiN)
      UART (/dev/ttySN)
      

      Interoperability (e.g. ZigBee to Thread) has been a large focus of many vendors due to the surge in popularity of voice-recognition in smart devices and the markets that they are driving. Corporate heavyweights are in full force in those economies. OpenHAB, on the other hand, has become relatively mature as a technology and vendor agnostic open-source front-end for interacting with multiple different IoT frameworks.

      The Linux Foundation has made excellent progress bringing together the business community around the Zephyr RTOS, although there are also plenty of other open-source RTOS solutions available. The linux-wpan developers have brought 6LowPan to the community, which works over 802.15.4 and Bluetooth, and that has paved the way for Thread, LoRa, and others. However, some closed or quasi-closed standards must rely on bridging techniques mainly due to license incompatibility. For that reason, it is helpful for the kernel community to preemptively start working on application layer frameworks and bridges, both community-driven and business-driven.

      For completely open-source implementations, experiments have shown promising results with Greybus, with a significant amount of code already in staging. The immediate benefits to the community in that case are clear. There are a variety of key subjects below the application layer that come into play for Greybus and other frameworks actively under development, such as

      Device Management
        - are devices abstracted through an API, or is a virtual /dev node provided?
        - unique ID / management of possibly many virtual /dev nodes and connection info
      Network Management
        - standards are nice (e.g. 802.15.4) and help to streamline in-tree support
        - is non-standard tech best kept out of tree?
        - userspace utilities beyond the command line (e.g. NetworkManager, NetLink extensions)
      Network Authentication
        - re-use machinery for e.g. 802.11 / 802.15.4?
        - generic approach for other MAC layers?
      Encryption
        - in userspace via e.g. SSL, /dev/crypto
      Firmware Updates
        - generally a different protocol for each IoT framework / application layer
        - Linux solutions should re-use components, e.g. SWUpdate

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      This Microconference will be a meeting ground for industry and hobbyist contributors alike and promises to shed some light on what is yet to come. There might even be a sneak peek at some new OSHW IoT developer kits.

      The hope is that some of the more experienced maintainers in linux-wpan, LoRa and OpenHAB can provide feedback and suggestions for those who are actively developing open-source IoT frameworks, protocols, and hardware.

      MC leads
      Christopher Friedt chris@friedt.co, Jason Kridner jkridner@beagleboard.org, and Drew Fustini drew@beagleboard.org

    • 10:00 - 13:30
      Databases MC

      Databases utilize and depend on a variety of kernel interfaces and are critically dependent on their specification, conformance to specification, and performance. Failure in any of these results in data loss, loss of revenue, a degraded experience, or, if discovered early, software debt. Specific interfaces can also remove small or large parts of user space code, creating greater efficiencies.

      This microconference will get a group of database developers together to talk about how their databases work, along with kernel developers currently developing a particular database-focused technology to talk about its interfaces and intended use.

      Database developers are expected to cover:

      The architecture of their database;
      The kernel interfaces utilized, particularly those critical to performance and integrity
      What is a general performance profile of their database with respect to kernel interfaces;
      What kernel difficulties they have experienced;
      What kernel interfaces are particularly useful;
      What kernel interfaces would have been nice to use, but were discounted for a particular reason;
      Particular pieces of their codebase that have convoluted implementations due to missing syscalls; and
      The direction of database development and what interfaces to newer hardware, like NVDIMM and atomic-write storage, would be desirable.

      The aim for kernel developers attending is to:

      Gain a relationship with database developers;
      Understand where in development kernel code they will need additional input by database developers;
      Gain an understanding on how to run database performance tests (or at least who to ask);
      Gain appreciation for previous work that has been useful; and
      Gain an understanding of what would be useful aspects to improve.

      The aim for database developers attending is to:

      Gain an understanding of who is implementing the functionality they need;
      Gain an understanding of kernel development;
      Learn about kernel features that exist, and how they can be incorporated into their implementation; and
      Learn how to run a test on a new kernel feature.

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC lead
      Daniel Black daniel@linux.ibm.com

    • 10:00 - 18:30
      Kernel Summit Track
      • 11:30
        Break 30m
      • 13:30
        Lunch 1h 30m
      • 16:30
        Break 30m
    • 10:00 - 18:30
      LPC Refereed Track
      • 10:00
        BPF is eating the world, don't you see? 45m

        The BPF VM in the kernel is being used in ever more scenarios where running a restricted, validated program in kernel space provides a super powerful mix of flexibility and performance that is transforming how the kernel works.

        That creates challenges for developers, sysadmins and support engineers: having tools for observing what BPF programs are doing in the system is critical.

        A lot has been done recently in improving tooling such as perf and bpftool to help with that, trying to make BPF fully supported for profiling, annotating, tracing, debugging.

        But not all perf tools can be used with JITed BPF programs right now. Areas that need work, such as placing probes and collecting variable contents, as well as further utilizing BTF for annotation, require interaction with developers to gather insights for further improvements, so that the full perf toolchest becomes available for use with BPF programs.

        These recent advances and this quest for feedback about what to do next should be the topic of this talk.

      • 10:45
        oomd2 and beyond: a year of improvements 45m

        Running out of memory on a host is a particularly nasty scenario. In the Linux kernel, if memory is being overcommitted, it results in the kernel out-of-memory (OOM) killer kicking in. Perhaps surprisingly, the kernel does not often handle this well. oomd builds on top of recent kernel development to effectively implement OOM killing in userspace. This results in a faster, more predictable, and more accurate handling of OOM scenarios.

        oomd has gained a number of new features and interesting deployments in the last year. The most notable feature is a complete redesign of the control plane which enables arbitrary but "gotcha"-free configurations. In this talk, Daniel Xu will cover past, present, future, and path-not-taken development plans along with experiences gained from overseeing large deployments of oomd.

      • 11:30
        Break 30m
      • 12:00
        Integration of PM-runtime with System-wide Power Management 45m

        There are two flavors of power management supported by the Linux kernel: system-wide PM, based on transitions of the entire system into sleep states, and working-state PM, focused on controlling individual components when the system as a whole is working. PM-runtime is the part of working-state PM concerned with putting devices into low-power states when they are not in use.

        Since both PM-runtime and system-wide PM act on devices in a similar way (that is, they both put devices into low-power states and possibly enable them to generate wakeup signals), optimizations related to the handling of already suspended devices can be made, at least in principle. In particular:
        • It should be possible to avoid resuming devices already suspended by runtime PM during system-wide PM transitions to sleep states.
        • It should be possible to leave devices that were suspended during system-wide PM transitions to sleep states in PM-runtime suspend while resuming the system from those states.
        • It should be possible to re-use PM-runtime callbacks in device drivers for the handling of system-wide PM.

        These optimizations are done by some drivers, but making them work in general turns out to be a hard problem. They are achieved in different ways by different drivers and some of them are in effect only in specific platform configurations. Moreover, there are no general guidelines or recipes that driver writers can follow in order to arrange for these optimizations to take place. In an attempt to start a discussion on approaching this problem space more consistently, I will give an overview of it, describe the solutions proposed and used so far and suggest some changes that may help to improve the situation.
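
        As a sketch of the third point above, the existing helpers pm_runtime_force_suspend() and pm_runtime_force_resume() already let a driver reuse its PM-runtime callbacks for system-wide sleep; the "foo" callbacks here are placeholders:

        #include <linux/pm_runtime.h>
        #include <linux/platform_device.h>

        static int foo_runtime_suspend(struct device *dev)
        {
                /* put the device into its low-power state */
                return 0;
        }

        static int foo_runtime_resume(struct device *dev)
        {
                /* bring the device back to full power */
                return 0;
        }

        static const struct dev_pm_ops foo_pm_ops = {
                SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
                /* reuse the PM-runtime callbacks for system-wide suspend/resume */
                SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                        pm_runtime_force_resume)
        };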

      • 12:45
        Kernel Address Space Isolation 45m

        Recent vulnerabilities like L1 Terminal Fault (L1TF) and Microarchitectural Data Sampling (MDS) have shown that the cpu hyper-threading architecture is very prone to leaking data with speculative execution attacks.

        Address space separation is a proven technology to prevent side channel vulnerabilities when speculative execution attacks are used. It has, in particular, been successfully used to fix the Meltdown vulnerability with the implementation of Kernel Page Table Isolation (KPTI).

        Kernel Address Space Isolation aims to use address spaces to isolate some parts of the kernel to prevent leaking sensitive data under speculative execution attacks.

        A particularly good example is KVM. When running KVM, a guest VM can use speculative execution attacks to leak data from the sibling hyper-thread, thus potentially accessing data from the host kernel, from the hypervisor or from another VM, as soon as they run on the same hyper-thread.

        If KVM can be run in an address space containing no sensitive data, and separated from the full kernel address space, then KVM would be immune from leaking secrets no matter on which cpu it is running, and no matter what is running on the sibling hyper-threads.

        A first proposal to implement KVM Address Space Isolation has recently been submitted and got some good feedback and discussions:

        https://lkml.org/lkml/2019/5/13/515

        This presentation will show the progress and challenges faced while implementing KVM Address Space Isolation. It also looks forward to discussing the possibility of a more generic kernel address space isolation framework (not limited to KVM), and how it can be interfaced with the memory management subsystem in particular.

        This session has been merged with a related proposal, whose abstract follows:

        Address space isolation has been used to protect the kernel from
        userspace and userspace programs from each other since the invention of
        virtual memory.

        Assuming that kernel bugs, and therefore vulnerabilities, are
        inevitable, it might be worth isolating parts of the kernel to minimize
        the damage that these vulnerabilities can cause.

        Recently we've implemented a proof-of-concept for "system call
        isolation (SCI)" mechanism that allows running a system call with
        significantly reduced page tables. In our model, the accesses to a
        significant part of the kernel memory generate page faults, thus
        giving the "core kernel" an opportunity to inspect the access and
        refuse it on a pre-defined policy.

        Our first target for system call isolation was an attempt to prevent
        ROP gadget execution [1]; despite its weaknesses, it makes a ROP attack
        harder to execute, and as a nice side effect SCI can be used as a
        Spectre mitigation.

        Another topic of interest is a marriage between namespaces and address
        spaces. For instance, the kernel objects that belong to a particular
        network namespace can be considered as private data and they should
        not be mapped in other network namespaces.

        This data separation greatly reduces the ability of a tenant in one
        namespace to exfiltrate data from a tenant in a different namespace
        via a kernel exploit because the data is no longer mapped in the
        global shared kernel address space.

        We believe it would be helpful to discuss the general idea of address
        space isolation inside the kernel, both from the technical aspect of
        how it can be achieved simply and efficiently and from the isolation
        aspect of what actual security guarantees it usefully provides.

        [1] https://lore.kernel.org/lkml/1556228754-12996-1-git-send-email-rppt@linux.ibm.com/

        Speakers: Alexandre Chartre (Oracle), James Bottomley (IBM), Mike Rapoport (IBM), Joel Nider (IBM Research)
      • 13:30
        Lunch 1h 30m
      • 15:00
        Enabling TPM based system security features 45m

        Nowadays all consumer PC/laptop devices contain a TPM 2.0 security chip (due to Windows hardware requirements). Servers and embedded devices increasingly carry these TPMs as well. The TPM provides several security functions to the system and the user, such as a smartcard-like secure keystore and key operations, secure secret storage, bruteforce-protected access control, etc.

        These capabilities can be used in a multitude of scenarios and use cases, including disk encryption, device authentication, user authentication, network authentication, etc. of desktops/laptops, servers, IoTs, mobiles, etc.
        Utilizing the TPM requires several layers of software: the driver (inside the kernel), TPM middleware (a TSS implementation), security middleware (e.g. PKCS#11), and applications (e.g. ssh).
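
        To make the layering concrete, here is a minimal sketch that talks to the TPM through the tpm2-tss Enhanced System API (ESAPI) to fetch a few random bytes; it assumes the tpm2-tss library and a reachable TPM (or simulator) via the default TCTI:

        #include <tss2/tss2_esys.h>
        #include <stdio.h>

        int main(void)
        {
                ESYS_CONTEXT *ctx = NULL;
                TPM2B_DIGEST *rnd = NULL;

                /* NULL TCTI lets the library pick a default (device or simulator) */
                if (Esys_Initialize(&ctx, NULL, NULL) != TSS2_RC_SUCCESS)
                        return 1;

                if (Esys_GetRandom(ctx, ESYS_TR_NONE, ESYS_TR_NONE, ESYS_TR_NONE,
                                   16, &rnd) == TSS2_RC_SUCCESS) {
                        for (int i = 0; i < rnd->size; i++)
                                printf("%02x", rnd->buffer[i]);
                        printf("\n");
                }

                Esys_Free(rnd);
                Esys_Finalize(&ctx);
                return 0;
        }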

        This talk first gives an architectural overview of the hard-/software components involved in typical use cases. Then we will dive into a set of concrete use cases and on different ways in which they can be built up; these use cases will be related to device/user authentication around pkcs11 and openssl implementations.

        The talk will end with a list of software and works in progress for introducing TPM functionality to core applications. Finally, a list of potential projects for extending the utilization of the TPM in core software is presented. This latter list shall then drive the discussion on which features are still missing, which software has contributors attending who would like to include such features, and which software is currently missing from the list. The current lists of core software are available and updated at https://tpm2-software.github.io/software

        Keywords: core libraries, device support, security, tpm, tss

      • 15:45
        Utilizing tools made for "Big Data" to analyse Ftrace data - making it fast and easy 45m

        Tools based on low-level tracing tend to generate large amounts of data, typically output in some kind of text or binary format. On the other hand, the predefined data analysis features of those tools are often useless when it comes to solving a nontrivial or very user-specific problem. This is when the ability to do sophisticated analysis via scripting can be extremely useful.

        Fast and easy scripting on top of the tracing data is possible if we take advantage of already existing infrastructure, originally developed for the purposes of the "Big Data" and ML industries. A PoC interface for accessing Ftrace data in Python (via NumPy arrays) will be demonstrated, together with a few examples of analysis scripts. Currently the prototype of the interface is implemented as an extension of KernelShark. This is a work in progress, and we hope to receive advice from experts in the field to make sure the end result works seamlessly for them.

      • 16:30
        Break 30m
      • 17:00
        CPU controller on a single runqueue 45m

        The cgroups CPU controller in the Linux scheduler is implemented using hierarchical runqueues, which introduces a lot of complexity and incurs a large overhead with workloads that schedule frequently. This presentation is about a new design for the cgroups CPU controller, which uses just one runqueue and instead scales the vruntime by the inverse of the task priority. The goal is to make people familiar with the new design, so they know what is going on and do not need to spend a month examining kernel/sched/fair.c to figure things out.
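
        As a rough illustration of the scaling idea (this is a toy sketch, not the actual patch series): CFS already advances a task's vruntime more slowly the more weight it has, and the flat-runqueue design keeps that arithmetic while folding the cgroup hierarchy's weights into a single per-task weight.

        #define NICE_0_LOAD 1024

        /*
         * Toy sketch: heavier (higher-priority) entities accrue vruntime more
         * slowly, so a single runqueue ordered by vruntime can still share CPU
         * time proportionally to weight.
         */
        static unsigned long long vruntime_delta(unsigned long long delta_exec,
                                                 unsigned long weight)
        {
                return delta_exec * NICE_0_LOAD / weight;
        }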

      • 17:45
        Formal verification made easy (and fast)! 45m

        Linux is complex, and formal verification has been gaining more and more attention because independent "asserts" in the code can be ambiguous and not cover all the desired points. Formal models aim to avoid such problems of natural language, but the problem is that "formal modeling and verification" sound complex. Things have been changing.

        What if I say it is possible to verify Linux behavior using a formal method?

        • Yes! We already have some models; people have been talking about it, but they seem to be very specific (Memory, Real-time...).

        What if I say it is possible to model many Linux subsystems, to auto-generate code from the model, to run the model on-the-fly, and that this can be as efficient as just tracing?

        • No way!

        Yes! It is! It is hard to believe, I know.

        In this talk, the author will present a methodology based on events and state (automata), and how to model Linux' complex behaviors with small and intuitive models. Then, how to transform the model into efficient C code, that can be loaded into the kernel on-the-fly to verify Linux! Experiments have also shown that this can be as efficient as tracing (sometimes even better)!

        This methodology can be applied to many kernel subsystems, and the idea of this talk is also to discuss how to proceed towards a more formally verified Linux!
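
        To give a feel for the shape of such a checker (a deliberately artificial model, not one of the models discussed in the talk), an automaton reduces to a current state plus a transition check that trace events can be fed through:

        #include <stdio.h>

        /* toy model: a resource must be acquired before use and then released */
        enum state { ST_RELEASED, ST_ACQUIRED };
        enum event { EV_ACQUIRE, EV_USE, EV_RELEASE };

        static enum state cur = ST_RELEASED;

        /* returns 0 for a valid transition, -1 when the model is violated */
        static int process_event(enum event ev)
        {
                switch (cur) {
                case ST_RELEASED:
                        if (ev == EV_ACQUIRE) {
                                cur = ST_ACQUIRED;
                                return 0;
                        }
                        break;
                case ST_ACQUIRED:
                        if (ev == EV_USE)
                                return 0;
                        if (ev == EV_RELEASE) {
                                cur = ST_RELEASED;
                                return 0;
                        }
                        break;
                }
                fprintf(stderr, "model violation: event %d in state %d\n", ev, cur);
                return -1;
        }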

    • 10:00 - 18:30
      Networking Summit Track
      • 11:30
        Break 30m
      • 13:30
        Lunch 1h 30m
      • 16:30
        Break 30m
    • 10:00 - 13:30
      Open Printing MC

      The Open Printing (OP) organisation works on the development of new printing architectures, technologies, printing infrastructure, and interface standards for Linux and Unix-style operating systems. OP collaborates with the IEEE-ISTO Printer Working Group (PWG) on IPP projects.

      We maintain cups-filters which allows CUPS to be used on any Unix-based (non-macOS) system. Open Printing also maintains the Foomatic database which is a database-driven system for integrating free software printer drivers with CUPS under Unix. It supports every free software printer driver known to us and every printer known to work with these drivers.

      Today it is very hard to think about printing on UNIX-based operating systems without the involvement of Open Printing. Open Printing has also been successful in implementing driverless printing following the IPP standards proposed by the PWG.

      Proposed Topics:

      Working with SANE to make IPP scanning a reality. We need to make scanning work without device drivers similar to driverless printing.
      Common Print Dialog Backends.
      Printer/Scanner Applications - The new format for printer and scanner drivers. A simple daemon emulating a driverless IPP printer and/or scanner.
      The Future of Printer Setup Tools - IPP Driverless Printing and IPP System Service. Controlling tools like cups-browsed (or perhaps also the print dialog backends?) so that the user's print dialogs only show the relevant printers, or to create printer clusters.
      3D Printing without the use of any slicer: a filter that can convert an STL file to G-code.

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Till Kamppeter (till.kamppeter@gmail.com ) or Aveek Basu (basu.aveek@gmail.com)

    • 10:00 - 13:30
      Real Time MC

      Since 2004 a project has been improving the real-time and low-latency features of Linux. This project has become known as PREEMPT_RT, formerly the real-time patch. Over the past decade, many parts of PREEMPT_RT became part of the official Linux code base. Examples of what came from PREEMPT_RT include: real-time mutexes, high-resolution timers, lockdep, ftrace, RT scheduling, SCHED_DEADLINE, RCU_PREEMPT, generic interrupts, priority inheritance futexes, threaded interrupt handlers and more. The number of patches that need integration has been reduced from previous years, and the pieces left are now mature enough to make their way into mainline Linux. This year could possibly be the year PREEMPT_RT is merged (tm)!

      In the final lap of this race, the last patches are on their way to being merged, but there are still some pieces missing. When the merge occurs, PREEMPT_RT will start to follow a new pace: that of Linus's tree. So, it is possible to raise the following discussions:

      The status of the merge, and how can we resolve the last issues that block the merge;
      How can we improve the testing of the -rt, to follow the problems raised as Linus's tree advances;
      What's next?
      Proposed topics:

      Real-time Containers
      Proxy execution discussion
      Merge - what is missing and who can help?
      Rework of softirq - what is needed for the -rt merge
      An in-kernel view of Latency
      Ongoing work on RCU that impacts per-cpu threads
      How BPF can influence the PREEMPT_RT kernel latency
      Core-schedule and the RT schedulers
      Stable maintainers tools discussion & improvements.
      Improvements on full CPU isolation
      What tools can we add into tools/ that other kernel developers can use to test and learn about PREEMPT_RT?
      What tests can we add to tools/testing/selftests?
      New tools for timing regression test, e.g. locking, overheads...
      What kernel boot self-tests can be added?
      Discuss various types of failures that can happen with PREEMPT_RT that normally would not happen in the vanilla kernel, e.g, with lockdep, preemption model.
      The continuation of discussions on topics from last year's microconference, including the development done during this (almost full) year, is also welcome!

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC lead
      Daniel Bristot de Oliveira bristot@redhat.com

    • 15:00 - 18:30
      Android MC

      Building on the Treble and Generic System Image work, Android is
      further pushing the boundaries of upgradability and modularization with
      a fairly ambitious goal: Generic Kernel Image (GKI). With GKI, Android
      enablement by silicon vendors would become independent of the Linux
      kernel running on a device. As such, kernels could easily be upgraded
      without requiring any rework of the initial hardware porting efforts.
      Accomplishing this requires several important changes and some of the
      major topics of this year's Android MC at LPC will cover the work
      involved. The Android MC will also cover other topics that had been the
      subject of ongoing conversations in past MCs such as: memory, graphics,
      storage and virtualization.

      Proposed topics include:

      Generic Kernel Image
      ABI Testing Tools
      Android usage of memory pressure signals in userspace low memory killer
      Testing: general issues, frameworks, devices, power, performance, etc.
      DRM/KMS for Android, adoption and upstreaming
      dmabuf heaps upstreaming
      dmabuf cache management optimizations
      kernel graphics buffer (dmabuf based)
      SDcardfs
      uid stats
      vma naming
      virtualization/virtio devices (camera/drm)
      libcamera unification

      These talks build on the continuation of the work done last year as reported in the Android MC 2018 Progress report. Specifically:

      Symbol namespaces have gone ahead
      There is continued work on using memory pressure signals for userspace low memory killing
      Userfs checkpointing has gone ahead with an Android-specific solution
      The work continues on common graphics infrastructure

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Karim Yaghmour karim.yaghmour@opersys.com, Todd Kjos tkjos@google.com, Sandeep Patil sspatil@google.com, and John Stultz john.stultz@linaro.org

    • 15:00 - 18:30
      Containers and Checkpoint/Restore MC

      The Containers and Checkpoint/Restore MC at Linux Plumbers is the opportunity for runtime maintainers, kernel developers and others involved with containers on Linux to talk about what they are up to and agree on the next major changes to kernel and userspace.

      Last year's edition covered a range of subjects and a lot of progress has been made on all of them. There is a working prototype for an id-shifting filesystem that some distributions already choose to include, proper support for running Android in containers via binderfs, seccomp-based syscall interception, and improved container migration through the userfaultfd patchsets.

      Last year's success has prompted us to reprise the microconference this year. Topics we would like to cover include:

      Android containers
      Agree on an upstreamable approach to shiftfs
      Securing containers by rethinking parts of ptrace access permissions, restricting or removing the ability to re-open file descriptors through procfs with higher permissions than they were originally created with, and, in general, making procfs more secure or restricted.
      Adoption and transition of cgroup v2 in container workloads
      Upstreaming the time namespace patchset
      Adding a new clone syscall
      Adoption and improvement of the new mount and pidfd APIs
      Improving the state of userfaultfd and its adoption in container runtimes
      Speeding up container live migration
      Address space separation for containers
      More to be added based on CfP for this microconference

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Stéphane Graber stgraber@stgraber.org, Christian Brauner christian@brauner.io, and Mike Rapoport mike.rapoport@gmail.com

    • 15:00 - 18:30
      Power Management and Thermal Control MC

      The focus of this MC will be on power-management and thermal-control frameworks, task scheduling in relation to power/energy optimizations and thermal control, platform power-management mechanisms, and thermal-control methods. The goal is to facilitate cross-framework and cross-platform discussions that can help improve power and energy-awareness and thermal control in Linux.

      Prospective topics:

      CPU idle-time management improvements
      Device power management based on platform firmware
      DVFS in Linux
      Energy-aware and thermal-aware scheduling
      Consumer-producer workloads, power distribution
      Thermal-control methods
      Thermal-control frameworks

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Rafael J. Wysocki (rafael@kernel.org) and Eduardo Valentin (edubezval@gmail.com)

    • 10:00 - 17:45
      Kernel Summit Track
      • 11:30
        Break 30m
      • 13:30
        Lunch 1h 30m
      • 16:30
        Break 30m
    • 10:00 - 17:45
      LPC Refereed Track
      • 10:00
        Finding more DRAM 45m

        The demand for DRAM across different platforms is increasing but the cost is not decreasing. Thus DRAM is a major factor in the total cost of all kinds of devices, whether mobile, desktop or servers. In this talk we will present the work we are doing at Google, applicable to Android, Chrome OS and data center servers, on extracting more memory out of running applications without impacting performance.

        The key is to proactively reclaim idle memory from the running applications. For Android and Chrome OS, a user space controller can provide hints about idle memory at the application level, while for servers running multiple workloads, an idle memory tracking mechanism is needed. With such hints the kernel can proactively reclaim memory, provided that the estimated refault cost is not high. Using in-memory compression or second-tier memory, the refault cost can be reduced drastically.
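
        For reference, the upstream kernel already exposes an idle page tracking interface (Documentation/admin-guide/mm/idle_page_tracking.rst); the sketch below only shows that principle and is not the machinery we deploy. Each 64-bit word of the bitmap covers 64 page frames: setting a bit marks the frame idle, and a bit that is still set later means the frame was not accessed in between.

        #include <fcntl.h>
        #include <stdint.h>
        #include <unistd.h>

        static int page_idle_open(void)
        {
                return open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);
        }

        /* mark one page frame idle (clears the "accessed" information) */
        static void mark_idle(int fd, uint64_t pfn)
        {
                uint64_t bits = 1ULL << (pfn % 64);

                pwrite(fd, &bits, sizeof(bits), (pfn / 64) * sizeof(bits));
        }

        /* non-zero if the frame has not been accessed since mark_idle() */
        static int still_idle(int fd, uint64_t pfn)
        {
                uint64_t bits = 0;

                pread(fd, &bits, sizeof(bits), (pfn / 64) * sizeof(bits));
                return !!(bits & (1ULL << (pfn % 64)));
        }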

        We have developed and deployed proactive reclaim and idle memory tracking across Google data centers [1]. Defining idle memory as memory not accessed in the last 2 minutes, we found 32% idle memory across data centers and we were able to reclaim 30% of this idle memory while not impacting performance. This results in 3x cheaper memory for our data centers. 98% of the applications spend only around 0.1% of their CPU on memory compression and decompression. Also, idle memory tracking on average takes less than 11% of a single logical CPU.

        The cost of proactive reclaim and idle memory tracking is reasonable given the data centers' cost of ownership of memory; however, it imposes challenges for power-constrained devices based on Android and Chrome OS. These devices run diverse applications, e.g. Chrome OS can run Android and Linux in a VM. To that end, we are working on making idle memory tracking and proactive reclaim feasible for such devices. Hence, we would like to initiate a discussion on making proactive reclaim useful for other use cases as well.

        [1] Software-Defined Far Memory in Warehouse-Scale Computers, ACM ASPLOS 2019.

      • 10:45
        To be announced 45m
      • 11:30
        Break 30m
      • 12:00
        Efficient Userspace Optimistic Spinning Locks 45m

        The most commonly used simple locking functions provided by the pthread library are pthread_mutex and pthread_rwlock. They are sleeping locks and so suffer from unpredictable wakeup latency, which limits locking throughput.

        Userspace spinning locks can potentially offer better locking throughput, but they also suffer other drawbacks like lock holder preemption which will waste valuable CPU time for those lock spinning CPUs. Another spinning lock problem is contention on the lock cacheline when a large number of CPUs are spinning on it.

        This talk presents a hybrid spinning/sleeping lock where a lock waiter can choose to spin in userspace or in the kernel waiting for the lock holder to release the lock. While spinning in the kernel, the lock waiters will queue up so that only the one at the head of the queue spins on the lock, reducing lock cacheline contention. If the lock holder is not running, the kernel lock waiters will go to sleep too, so as not to waste valuable CPU cycles. The state of kernel lock spinners is reflected in the value of the lock, so userspace spinners can monitor the lock state and determine the best way forward.

        This new type of hybrid spinning/sleeping lock combines the best attributes of sleeping and spinning locks. It is especially useful for applications that need to run on large NUMA systems where a potentially large number of CPUs may be pounding on a given lock.
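
        For contrast, here is what a conventional spin-then-sleep lock looks like with today's futex interface (this is not the proposed kernel-assisted spinning, just the baseline it improves on): waiters spin briefly in userspace and then block in the kernel.

        #include <stdatomic.h>
        #include <linux/futex.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        /* lock word: 0 = unlocked, 1 = locked, 2 = locked with waiters */
        static long futex(atomic_int *uaddr, int op, int val)
        {
                return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
        }

        static void hybrid_lock(atomic_int *l, int spin_limit)
        {
                /* phase 1: optimistic spinning in userspace */
                for (int i = 0; i < spin_limit; i++) {
                        int expected = 0;

                        if (atomic_compare_exchange_weak(l, &expected, 1))
                                return;
                }
                /* phase 2: mark the lock contended and sleep in the kernel */
                while (atomic_exchange(l, 2) != 0)
                        futex(l, FUTEX_WAIT, 2);
        }

        static void hybrid_unlock(atomic_int *l)
        {
                /* wake one sleeper only if somebody is (or was) waiting */
                if (atomic_exchange(l, 0) == 2)
                        futex(l, FUTEX_WAKE, 1);
        }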

      • 12:45
        To be announced 45m
      • 13:30
        Lunch 1h 30m
      • 15:00
        Writing A Kernel Driver in Rust 45m

        In recent years, Rust has become a serious candidate for various
        projects. Given its strong typing and memory model, it lends itself to
        software that would usually have been written in C.
        Linux kernel drivers have traditionally been written in C as well.
        In contrast to the core kernel, they are usually less strictly reviewed
        and may have been written by people who do not necessarily have the
        required expertise to interface with the kernel.
        While Rust may not be the best choice for the core kernel, it may provide
        a useful alternative for kernel drivers.
        In this talk I will present my efforts to port a small filesystem I
        wrote and upstreamed last year to Rust. This is very much WIP, so failure
        is very much an option.

      • 15:45
        Linux Gen-Z Sub-system 45m

        Discuss design choices for a Gen-Z kernel sub-system and the challenges of supporting the Gen-Z interconnect in Linux.

        Gen-Z is a fabric interconnect that connects a broad range of devices from CPUs, memory, I/O, and switches to other computers and all of their devices. It scales from two components in an enclosure to an exascale mesh. The Gen-Z consortium has over 70 member companies and the first version of the specification was published in 2018. Past history for new interconnects suggests we will see actual hardware products two years after the first specification - in 2020. We propose to add support for a Gen-Z kernel sub-system, a Gen-Z component device driver environment, and user space management applications.

        A Gen-Z sub-system needs support for these Gen-Z features:

        • Registration and enumeration services that are similar to existing
          sub-systems like PCI.
        • Gen-Z Memory Management Unit (ZMMU) provides memory mapping and access to fabric addresses. The Gen-Z sub-system can provide services to track PTE entries for the two types of ZMMU's in the specification: page grid and page table based.
        • Region Keys (R-Keys) - Each ZMMU page can have R-Keys used to validate page access authorization. The Gen-Z sub-system needs to provide APIs for tracking, freeing, and validating R-Keys.
        • Process Address Space Identifier (PASID) - ZMMU requester and responder Page Table Entries (PTEs) contain a PASID. The Gen-Z sub-system needs to provide APIs for tracking PASIDs.
        • Data mover - Transmit and receive data movers are optional elements in bridges and other Gen-Z components. The Gen-Z sub-system can provide a user space interface to a RDMA driver that uses a Gen-Z data mover. For example, a libfabric Gen-Z provider implementation can use a RDMA driver to access data mover queues.
        • UUIDs - Components are identified by UUIDs. The Gen-Z sub-system provides interfaces for tracking UUIDs of local and remote components. A Gen-Z driver binds to a UUID similarly to how a PCI driver binds to a vendor/device id.
        • Interrupt handling - Interrupt request packets in Gen-Z trigger local interrupts. Local components such as bridges and data movers can also be sources of interrupts.
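
        As a purely hypothetical sketch of the PCI-style binding model mentioned above (none of the genz_* names below exist; they only illustrate how a driver could declare the UUIDs it handles):

          /* Hypothetical sketch only: illustrates UUID-based driver binding
           * modeled on the PCI id-table pattern. Not an existing API. */
          #include <linux/device.h>
          #include <linux/module.h>
          #include <linux/uuid.h>

          struct genz_device_id {
              uuid_t uuid;                        /* component UUID to match */
          };

          struct genz_driver {
              const char *name;
              const struct genz_device_id *id_table;
              int (*probe)(struct device *dev);   /* called on a UUID match */
              void (*remove)(struct device *dev);
          };

          static const struct genz_device_id example_ids[] = {
              /* Illustrative UUID only. */
              { UUID_INIT(0x12345678, 0x9abc, 0xdef0,
                          0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0) },
              { }
          };

          static int example_probe(struct device *dev)
          {
              dev_info(dev, "Gen-Z component matched by UUID\n");
              return 0;
          }

          static struct genz_driver example_driver = {
              .name     = "genz-example",
              .id_table = example_ids,
              .probe    = example_probe,
          };
          /* A real sub-system would provide something like
           * genz_register_driver(&example_driver) for module init. */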

        We will discuss our proposed design for the Gen-Z sub-system, illustrated in the following block diagram:

        [Figure: Gen-Z Sub-system Block Diagram]

        Gen-Z fabric management is global to the fabric. The operating system may not know what components on the fabric are assigned to it; the fabric manager decides which components belong to the operating system. Although user space discovery/management is unusual for Linux, it will allow the Gen-Z sub-system to focus on the mechanism of component management rather than the policy choices a fabric manager must make.

        To support user space discovery/management, the Gen-Z sub-system needs interfaces for management services:

        • Fabric managers need read/write access to component control space in order to do fabric discovery and configuration. We propose using /sys files for each control structure and table.
        • User space Gen-Z managers need notification of management events/interrupts from the Gen-Z fabric. We propose using poll on the bridges' device files to communicate events.
        • Local management services pass fabric discovery events from user space to the kernel. Our proposed design uses generic netlink messages to communicate these component add/remove/modify events (a sketch follows this list).
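
        To make the proposed user space to kernel path concrete, here is a minimal generic netlink sketch; the family, command, and attribute names are illustrative assumptions, not the actual Gen-Z sub-system interface:

          /* Sketch of a generic netlink family for component add/remove events.
           * All genz_* identifiers are illustrative, not an existing interface. */
          #include <linux/module.h>
          #include <net/genetlink.h>

          enum { GENZ_ATTR_UNSPEC, GENZ_ATTR_UUID, __GENZ_ATTR_MAX };
          #define GENZ_ATTR_MAX (__GENZ_ATTR_MAX - 1)

          enum { GENZ_CMD_UNSPEC, GENZ_CMD_COMPONENT_ADD, GENZ_CMD_COMPONENT_REMOVE };

          static const struct nla_policy genz_policy[GENZ_ATTR_MAX + 1] = {
              [GENZ_ATTR_UUID] = { .type = NLA_BINARY, .len = 16 },
          };

          /* The user space fabric manager reports a newly discovered component. */
          static int genz_component_add(struct sk_buff *skb, struct genl_info *info)
          {
              if (!info->attrs[GENZ_ATTR_UUID])
                  return -EINVAL;
              /* ... create the in-kernel component object and match drivers ... */
              return 0;
          }

          static const struct genl_ops genz_genl_ops[] = {
              { .cmd = GENZ_CMD_COMPONENT_ADD, .doit = genz_component_add },
          };

          static struct genl_family genz_genl_family = {
              .name    = "genz_mgmt",             /* illustrative family name */
              .version = 1,
              .maxattr = GENZ_ATTR_MAX,
              .policy  = genz_policy,
              .ops     = genz_genl_ops,
              .n_ops   = ARRAY_SIZE(genz_genl_ops),
              .module  = THIS_MODULE,
          };
          /* genl_register_family(&genz_genl_family) would be called at init time. */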

        In designing the Gen-Z Linux sub-system, we are leveraging our experience writing Linux bridge drivers for three different Gen-Z hardware bridges. Most recently, we wrote the DOE Exascale PathForward project's bridge driver with data movers (https://github.com/HewlettPackard/zhpe-driver). We also wrote drivers for the Gen-Z Consortium's demonstration card, which supports a block device and a NIC, as well as a driver for the bridge in HPE's "The Machine", a precursor to Gen-Z.

        From our work so far, here are questions we would like feedback on:

        • We intend to expose control space in /sys so that user space fabric managers can perform discovery and configuration. We ask for feedback on the proposed hierarchy and mechanisms.
        • Gen-Z uses PASIDs and the sub-system could use generic PASID
          interfaces. Any interest in this elsewhere in the kernel?
        • We need generic IOMMU interfaces, since the Gen-Z ZMMU has to interface with the IOMMU in a platform-independent way. Any interest in this elsewhere in the kernel? We saw some patch sets along these lines.
        • We intend to use generic netlink for communication between user space and the kernel. Any thoughts on that decision?
        • Gen-Z maps huge address spaces from remote components, and to get good performance those mappings need huge pages. Currently, the kernel does not support this use case. We would like to discuss how best to handle these huge mappings.
        • We wrote a parser for the Gen-Z specification's control structures that generates C structures with bitfields. In general, we know the Linux kernel frowns on bitfields. Are bitfields OK in this context? (A small illustration follows this list.)
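
        To make the bitfield question concrete, the two styles under discussion look roughly like this; the field names and widths are made up for illustration:

          /* Illustrative only: the same 32-bit control-structure word described
           * with C bitfields (layout is compiler/ABI dependent) versus the
           * conventional kernel style of explicit masks and shifts. */
          #include <stdint.h>

          /* Generated-parser style: C bitfields. */
          struct genz_ctl_word_bf {
              uint32_t version : 4;
              uint32_t type    : 8;
              uint32_t size    : 20;
          };

          /* Conventional kernel style: masks, shifts, and accessors. */
          #define GENZ_CTL_VERSION_MASK  0x0000000fu
          #define GENZ_CTL_TYPE_MASK     0x00000ff0u
          #define GENZ_CTL_TYPE_SHIFT    4
          #define GENZ_CTL_SIZE_MASK     0xfffff000u
          #define GENZ_CTL_SIZE_SHIFT    12

          static inline uint32_t genz_ctl_type(uint32_t word)
          {
              return (word & GENZ_CTL_TYPE_MASK) >> GENZ_CTL_TYPE_SHIFT;
          }
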
      • 16:30
        Break 30m
      • 17:00
        To be announced 45m
    • 10:00 17:45
      Networking Summit Track
      • 11:30
        Break 30m
      • 13:30
        Lunch 1h 30m
      • 16:30
        Break 30m
    • 10:00 13:30
      RDMA MC

      Following the success of the past 3 years at LPC, we would like to see a 4th RDMA (Remote Direct Memory Access networking) microconference this year. The meetings at the last conferences have led to significant improvements being merged into the RDMA subsystem over the years: a new user API, container support, testability/syzkaller, system bootup, Soft iWARP, etc.

      In Vancouver, the RDMA track hosted some core kernel discussions on get_user_pages that are now starting to see solutions merged. We expect that RDMA will again be the natural microconference in which to hold these quasi-MM discussions at LPC.

      This year there remain difficult open issues that need resolution:

      RDMA and PCI peer to peer for GPU and NVMe applications, including HMM and DMABUF topics
      RDMA and DAX (carry over from LSF/MM)
      Final pieces to complete the container work
      Contiguous system memory allocations for userspace (unresolved from 2017)
      Shared protection domains and memory registrations
      NVMe offload
      Integration of HMM and ODP
      And several new developing areas of interest:

      Multi-vendor virtualized 'virtio' RDMA
      Non-standard driver features and their impact on the design of the subsystem
      Encrypted RDMA traffic
      Rework and simplification of the driver API
      Previous years: 2018, 2017 (2nd RDMA mini-summit summary), and 2016 (1st RDMA mini-summit summary)

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Leon Romanovsky leon@leon.nu, Jason Gunthorpe jgg@mellanox.com

    • 10:00 13:30
      RISC-V MC

      The Linux Plumbers 2019 RISC-V MC will continue the trend established in 2018 [2] to address different relevant problems in RISC-V Linux land.

      The overall progress in the RISC-V software ecosystem since last year has been really impressive. To continue this growth, the RISC-V track at Plumbers will focus on finding solutions and discussing ideas that require kernel changes. This should also result in a significant increase in active developer participation in code review/patch submissions, leading to a better and more stable kernel for RISC-V.

      Expected topics
      RISC-V Platform Specification Progress, including some extensions such as power management - Palmer Dabbelt
      Fixing the Linux boot process in RISC-V (RISC-V now has better support for open source boot loaders like U-Boot and coreboot compared to last year. As a result, developers can use the same boot loaders to boot Linux on RISC-V as they do on other architectures, but there's more work to be done) - Atish Patra
      RISC-V hypervisor emulation [5] - Alistair Francis
      RISC-V hypervisor implementation - Anup Patel
      NOMMU Linux for RISC-V - Damien Le Moal
      More to be added based on CfP for this microconference

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Atish Patra (atish.patra@wdc.com) or Palmer Dabbelt (palmer@dabbelt.com)

    • 10:00 13:30
      Tracing MC

      Linux Plumbers 2019 is pleased to welcome the Tracing microconference again this year. Tracing is once again picking up in activity, and new and exciting topics are emerging.

      There is a broad range of ways to perform tracing in Linux, from the original mainline Linux tracer, Ftrace, to profiling tools like perf, more complex customized tracing like BPF, and out-of-tree tracers like LTTng, SystemTap, and DTrace. Come and join us, and not only learn but help direct the future progress of tracing inside the Linux kernel and beyond!

      Expected topics
      bpf tracing – Anything to do with BPF and tracing combined
      libtrace – Making libraries from our tools
      Packaging – Packaging these libraries
      babeltrace – Anything that we need to do to get all tracers talking to each other
      Those pesky tracepoints – How to get what we want from places where trace events are taboo
      Changing tracepoints – Without breaking userspace
      Function tracing – Modification of current implementation
      Rewriting of the Function Graph tracer – Can kretprobes and function graph tracer merge as one
      Histogram and synthetic tracepoints – Making a better interface that is more intuitive to use
      More to be added based on CfP for this microconference

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC lead
      Steven Rostedt (rostedt@goodmis.org)

    • 15:00 18:30
      BPF MC

      After having run a standalone BPF microconference for the first time at last year's [0] [1] [2] Linux Plumbers Conference, we were overwhelmed with thoroughly positive feedback. We received more submissions than we could accommodate in the one-day slot, and the room at the conference venue was fully packed, even though about half of the networking track's submissions covered BPF-related topics as well.

      We would like to build on this success by organizing a BPF microconference again in 2019. The microconference aims to cover BPF-related kernel topics, mainly in the BPF core area, as well as focused discussions in specific subsystems (tracing, security,
      networking), with short 1-2 slide presentations, in order to get BPF developers together in a face-to-face working meetup for tackling and hashing out unresolved issues and discussing new ideas.

      Expected audience
      Folks knowledgeable with BPF that work in core areas or in subsystems making use of BPF.

      Expected topics
      libbpf, loader unification
      Standardized BPF ELF format
      Multi-object semantics and linker-style logic for BPF loaders
      Improving verifier scalability to 1 million instructions
      Sleepable BPF programs
      State on BPF loop support
      Proper string support in BPF
      Indirect calls in BPF
      BPF timers
      BPF type format (BTF)
      Unprivileged BPF
      BTF of vmlinux
      BTF annotated raw_tracepoints
      BPF (k)litmus support
      bpftool
      LLVM BPF backend
      JITs and BPF offloading
      More to be added based on CfP for this microconference

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Alexei Starovoitov alex8star@yahoo.com and Daniel Borkmann daniel@covalent.io

      [0] https://linuxplumbersconf.org/event/2/sessions/16/#20181115
      [1] https://lwn.net/Articles/773198/
      [2] https://lwn.net/Articles/773605/

    • 15:00 18:30
      Live Patching MC

      The main purpose of the Linux Plumbers 2019 Live Patching microconference is to involve all stakeholders in an open discussion about the remaining issues that need to be solved in order to make live patching of the Linux kernel, and live patching of Linux userspace, feature complete.

      The intention is to mainly focus on the features that have been proposed (some even with a preliminary implementation), but not yet finished, with the ultimate goal of sorting out the remaining issues.

      This proposal follows up on the history of past LPC live patching microconferences that have been very useful and pushed the development forward a lot.

      Currently proposed discussion/presentation topics (we have not yet gone through an internal selection process) with tentatively confirmed attendance:

      5 min Intro - What happened in kernel live patching over the last year
      API for state changes made by callbacks [1][2]
      source-based livepatch creation tooling [3][4]
      klp-convert [5][6]
      livepatch developers guide
      userspace live patching
      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Jiri Kosina jkosina@suse.cz and Josh Poimboeuf jpoimboe@redhat.com

    • 15:00 18:30
      Testing and Fuzzing MC

      The Linux Plumbers 2019 Testing and Fuzzing track focuses on advancing the current state of testing of the Linux Kernel.

      Potential topics:

      Defragmentation of testing infrastructure: how we can combine testing infrastructure to avoid duplication.
      Better sanitizers: Tag-based KASAN, making KTSAN usable, etc.
      Better hardware testing, hardware sanitizers.
      Are fuzzers "solved"?
      Improving real-time testing.
      Using Clang for better testing coverage.
      Unit test framework. Content will most likely depend on the state of the patch series closer to the event.
      Future improvement for KernelCI. Bringing in functional tests? Improving the underlying infrastructure?
      Making KMSAN/KTSAN more usable.
      KASAN work in progress
      Syzkaller (+ fuzzing hardware interfaces)
      Stable tree (functional) testing
      KernelCI (autobisect + new testing suites + functional testing)
      Kernel selftests
      Smatch
      Our objective is to gather leading developers of the kernel and its related testing infrastructure and utilities in an attempt to advance the state of the various utilities in use (and possibly unify some of them), as well as the overall testing infrastructure of the kernel. We are hopeful that we can build on the experience of the participants of this MC to create solid plans for the upcoming year.

      If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.

      MC leads
      Sasha Levin levinsasha928@gmail.com and Dhaval Giani dhaval.giani@gmail.com

    • 18:45 19:45
      Closing Plenary 1h
    • 20:00 23:00
      Closing Party 3h