Articles

  • Avatars, identicons, and hash visualization

    Oftentimes we have a situation where we want an easy way to distinguish people of whom we only know a name or a nickname. Probably the most common example is telling people apart on various discussion platforms, like chat rooms, forums, wikis, or issue tracking systems. These very often let people use handles, often real names or nicknames, and profile images as an alternative. But as people very often don’t provide any profile image, many services use programmatically generated images to give some uniqueness to default profile images.

    In this article I’ll take a look at a few approaches to generating default profile images and other object identifiers. I’ll also look at the process of creating some specific hash visualization algorithms that you may have encountered on the web. The general name for such programmatically generated images is hash visualization.

    Use cases and background

    A starting point for visualizing arbitrary data is often hashing. Hashing in this context means creating a fixed-length representation of an arbitrary value. This fixed-length representation is just a big number that can then be used as a starting point for distinguishing different things from each other.
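
    As a minimal sketch of this idea (assuming Python 3 and its standard hashlib module), a name can be hashed into a fixed-length digest, interpreted as one big number, and sliced into smaller fields for the visualization:

    import hashlib

    def name_to_number(name):
        # Hash an arbitrary string into a fixed-length 256-bit digest.
        digest = hashlib.sha256(name.encode("utf-8")).digest()
        # Interpret the digest as one big integer.
        return int.from_bytes(digest, "big")

    number = name_to_number("alice")
    # Slice off a small field, for example 4 bits for a color index.
    color_index = number & 0xF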

    Hash visualization aims to generate a visual identifier that makes it easy to tell different objects apart by taking advantage of the preattentive processing of the human visual system. There are various categories that the human visual system uses for preattentive processing, like form, color, motion, and spatial position. Different hash visualization schemes take advantage of these, knowingly or unknowingly.

    Simple methods use specific colors as the visualization, but this limits the values that can be preattentively distinguished to maybe around 10-20 (around 4 bits of information) per colored area. Using additional graphical elements makes it possible to create more distinct elements that are easy to separate from each other. One thesis tries hash visualization with 60 bits of data, with varying success. So the amount of visualized data that can be easily distinguished by graphical means lies somewhere between 4 and 60 bits.

    Avatars

    Perhaps the most common way to distinguish different users of an Internet based service is to use a user name and an avatar image. Avatars often go with custom avatar images that users themselves upload to the service. In case the service itself does not want to host avatars, it can link to services like Gravatar that provide avatar hosting as a service. This way a user can use the same avatar in multiple services just by providing their email address.
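
    For example, a Gravatar image URL is derived from the MD5 hash of the trimmed, lowercased email address. The sketch below (Python, using the publicly documented URL parameters) also asks Gravatar to fall back to an automatically generated identicon when no custom avatar exists:

    import hashlib

    def gravatar_url(email, size=80):
        # Gravatar identifies avatars by the MD5 hash of the
        # trimmed, lowercased email address.
        email_hash = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
        # d=identicon requests a generated default image when the
        # user has not uploaded an avatar; s= sets the size in pixels.
        return "https://www.gravatar.com/avatar/{}?s={}&d=identicon".format(
            email_hash, size)

    print(gravatar_url("user@example.com"))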

    But what about a situation where the user has not uploaded or does not want to upload an avatar image to the service? There usually is a placeholder image that indicates that the user has not set their custom avatar.

    There seem to be three major approaches to generate this placeholder image:

    1. A generic dummy placeholder image. These often come in the shape of a generic human profile.
    2. An image from a predefined collection of images. These are usually related to a theme that the site wants to present.
    3. A partially or fully algorithmically generated image. These usually make it possible to get the most variety for placeholders with the least amount of work.

    Algorithmically generated images vary greatly and usually come in three varieties:

    1. Simple images that mainly use a color and a letter to distinguish users.
    2. Images that consist of a set of predefined parts and color variations.
    3. More complex shapes usually created fully algorithmically.

    I have taken a closer look at different algorithmic avatar creation methods in the form of the WordPress identicon, the GitHub identicon, and MonsterID. These provide an overview of how algorithmic avatar generation can work. There is also a chapter on using a character on a colored background, as an example of a simple algorithmic method for default avatar generation.

    I want to focus here on visualizing the few methods that I have encountered rather than giving an all-encompassing overview of avatar generation. For that, the Avatar article on Wikipedia gives a better textual overview of these and many more methods than I could give.

    Character on a colored background

    Figure 1: Avatar examples having a colored background with the first letter of a user name.

    Services like Google Hangouts, Telegram, and Signal, and software with people collaboration functionality like Jira, generate a default avatar for a user by using the first letter of the user’s name on a colored background. This is a simple method where you need to select a font, a shape that surrounds the letter, and the size of the avatar. Some example avatars generated by this method are visible in figure 1.
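
    A minimal sketch of this approach (assuming Python with the Pillow imaging library) picks a background color from a hash of the name and draws the first letter on top of it; the centering here is only approximate, a real implementation would measure the glyph:

    import hashlib
    from PIL import Image, ImageDraw, ImageFont

    def letter_avatar(name, size=128):
        digest = hashlib.md5(name.encode("utf-8")).digest()
        # Use the first three hash bytes as an RGB background color.
        background = (digest[0], digest[1], digest[2])
        image = Image.new("RGB", (size, size), background)
        draw = ImageDraw.Draw(image)
        # Draw the first letter of the name roughly in the middle.
        letter = name[0].upper()
        font = ImageFont.load_default()
        draw.text((size // 2 - 4, size // 2 - 6), letter, fill="white", font=font)
        return image

    letter_avatar("alice").save("alice.png")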

    WordPress identicon

    Figure 2: Example WordPress identicons.

    The WordPress identicon has been a well known algorithmically generated avatar on web forums for quite a long time. It has been used to distinguish between users based on their IP addresses and email addresses. Example WordPress identicons can be seen in figure 2, where you should be able to notice some symmetry in the images.

    Figure 3: 44 parts from which a WordPress identicon is generated.

    Looking at the code that generates these avatars reveals 44 patterns, shown in figure 3, that the generator uses to build the images. They are placed symmetrically around the center and rotated 90 degrees when they reach the next quadrant. This process is illustrated in figure 4, which shows how these parts are placed and rotated around the center. In case the requested identicon has an even number of cells per side, parts get duplicated, so a 4x4 WordPress identicon only has 3 distinct parts in it. This is the same number as for a 3x3 WordPress identicon.
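
    The placement logic can be sketched roughly as follows (a conceptual Python sketch, not the actual WordPress code): one quadrant is filled with part indices taken from the hash, and that quadrant is then stamped into the remaining quadrants, rotated by 90 degrees each time, which produces the symmetric look:

    def rotate_clockwise(grid):
        # Rotate a square grid of part indices by 90 degrees.
        return [list(row) for row in zip(*grid[::-1])]

    def place_quadrants(quadrant):
        # Stamp the same quadrant four times, rotating it 90 degrees
        # for each new quadrant.
        quadrants = [quadrant]
        for _ in range(3):
            quadrants.append(rotate_clockwise(quadrants[-1]))
        return quadrants

    # A 2x2 quadrant of part indices chosen from the hash would give
    # a 4x4 identicon once the rotated copies are placed around the center.
    print(place_quadrants([[1, 2], [3, 4]]))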

    Figure 4: 5x5 WordPress identicon generation animated.

    GitHub identicon

    Figure 5: Example GitHub style identicons.

    GitHub is a great collaborative code repository sharing service where users can have identifying images. If a user does not have a profile image, however, GitHub generates an identicon that forms a 5x5 pixel sprite. Figure 5 shows examples of how such identicons can look. The exact algorithm that GitHub uses to generate these identicons is not public, so these example images just imitate the style of GitHub identicons.

    Figure 6: GitHub style identicon creation process animated. Visible pixels are horizontally mirrored.

    Figure 6 shows the process of creating such an identicon. Basically the given data is hashed, and a specific color is selected from that hash as the value for the visible pixels. Another part of the hash (15 bits) is then used to form an image that is horizontally mirrored. I have used a modified ruby_identicon to generate these images and animations. This library also enables the generation of other sizes than 5x5 sprites.
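
    A rough sketch of such a generator (Python, only imitating the style, since the exact GitHub algorithm is not public): derive a foreground color from part of the hash, let 15 other bits decide the left three columns of a 5x5 grid, and mirror them horizontally:

    import hashlib

    def github_style_identicon(name):
        digest = hashlib.md5(name.encode("utf-8")).digest()
        # A foreground color from three hash bytes.
        color = (digest[13], digest[14], digest[15])
        # 15 bits decide the left 3 cells of each of the 5 rows.
        bits = int.from_bytes(digest[:4], "big")
        grid = []
        for row in range(5):
            left = [(bits >> (row * 3 + col)) & 1 for col in range(3)]
            # Mirror the two leftmost columns onto the right side.
            grid.append(left + left[:2][::-1])
        return color, grid

    color, grid = github_style_identicon("alice")
    for row in grid:
        print("".join("#" if cell else "." for cell in row))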

    MonsterID

    Figure 7: Example artistic MonsterIDs.

    WordPress MonsterID is a hash visualization method where hashes are converted to various types of monsters. It’s one type of hash visualization where a lifelike creature is created from predefined body parts. WordPress MonsterID actually offers two types of monsters, the default ones and the artistic ones. Here I’m showing some example artistic monsters in figure 7, just because they look better than the default ones.

    Figure 8: Creating an artistic MonsterID image by selecting and colorizing an appropriate part from each category.

    Figure 8 shows the MonsterID creation process. First there is a lightly colored background on top of which we start forming the monster. We select a body part in the order of legs, hair, arms, body, eyes, and mouth. These parts are shown in figure 9. We then assign a color to the selected part, unless it’s one of the special cases, and overlay it on top of the image formed from the previous parts. There are special cases for which parts should have a predefined color or be left uncolored, but that does not change the general monster creation process.
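
    A simplified sketch of the selection logic (Python, with hypothetical part counts per category, not the actual plugin code): each category gets its own hash byte to pick a part, and further bytes give a color for the colorizable parts:

    import hashlib

    # Hypothetical number of available parts per category.
    PART_COUNTS = {"legs": 5, "hair": 5, "arms": 5, "body": 15, "eyes": 15, "mouth": 10}
    LAYER_ORDER = ["legs", "hair", "arms", "body", "eyes", "mouth"]

    def pick_monster_parts(seed):
        digest = hashlib.md5(seed.encode("utf-8")).digest()
        parts = {}
        for index, category in enumerate(LAYER_ORDER):
            # One hash byte selects the part within the category.
            parts[category] = digest[index] % PART_COUNTS[category]
        # Further bytes give an RGB color for the colorizable parts.
        color = (digest[6], digest[7], digest[8])
        return parts, color

    print(pick_monster_parts("alice"))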

    Figure 9: Parts from which a MonsterID is created divided into categories (hair, eyes, mouth, arms, body, and legs).

    In addition to the WordPress MonsterID plugin, Gravatar also supports old style MonsterIDs. These unfortunately do not look as good as the artistic monsters in the WordPress MonsterID plugin, and there is no option to select an alternative form for these avatars.

    OpenSSH visual host key

    Hash visualization also has its place in cryptography. The OpenSSH client has an option to display so called visual host keys.

    When an SSH client connects to a remote server that it has not seen before, it needs to accept the public key of this server so that it can communicate with it securely. To prevent man-in-the-middle attacks, the user initiating the connection is supposed to verify that the host key indeed matches. An example message about a host key verification request is shown in figure 10. The idea is that either you have seen this key beforehand, or you have some other means to get this key fingerprint and verify that it indeed belongs to the server you are trying to connect to. How well this works in practice is a completely different discussion.

    $ ssh example.com
    The authenticity of host 'example.com (127.1.2.3)' can't be established.
    RSA key fingerprint is SHA256:Cy86563vH6bGaY/jcsIsikpYOHvxZe/MVLJTcEQA3IU.
    Are you sure you want to continue connecting (yes/no)?
    Figure 10: The default SSH server key verification request on the first connection.

    As you can see, the key fingerprint is not easy to remember, and comparing two fingerprints can be quite hairy. When you enable visual host key support, as shown in figure 11, the idea is that you can quickly glance at the image to figure out if the key of a server is different than expected. This should not be treated as a method to check whether two keys are equal, as different keys can produce the same image.

    $ ssh -o VisualHostKey=true example.com
    The authenticity of host 'example.com (127.1.2.3)' can't be established.
    RSA key fingerprint is SHA256:Cy86563vH6bGaY/jcsIsikpYOHvxZe/MVLJTcEQA3IU.
    +---[RSA 8192]----+
    |     ..o.=+      |
    |      . E.       |
    |        . .      |
    | .       o       |
    |o o   + S o      |
    |.+ o o + *       |
    |o.. .o..B.o      |
    |.o  o.*BB= .     |
    |+ ...=o@@+o      |
    +----[SHA-256]----+
    Are you sure you want to continue connecting (yes/no)?
    Figure 11: Visual host key enabled SSH server key verification request on the first connection.

    The algorithm to generate these visual host keys is visualized in figure 12. It’s also described in detail in an article that analyzes the algorithmic security of this visual host key generation: The drunken bishop: An analysis of the OpenSSH fingerprint visualization algorithm.

    OpenSSH’s visual host key is formed in such a way that the formation starts from the center of the image. Then 2 bits of the host key fingerprint are read at a time and the location is moved in a diagonal direction accordingly. The visual element at each location is changed based on how many times that location has been visited. This can be seen better in the animation, where each visual host key generation step is visualized as its own frame and the bits related to the movement are highlighted.
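
    The movement logic can be sketched as follows (a simplified Python version of this drunken bishop walk, not OpenSSH’s exact implementation or character set; here the input is simply hashed with MD5 as a stand-in for the key fingerprint): each pair of bits moves the position one step diagonally, the position is clamped to the board, and every field counts how many times it has been visited:

    import hashlib

    WIDTH, HEIGHT = 17, 9          # The same board size OpenSSH uses.
    SYMBOLS = " .o+=*BOX@%&#/^"    # Visit counts mapped to characters (sketch only).

    def drunken_bishop(data):
        digest = hashlib.md5(data).digest()
        board = [[0] * WIDTH for _ in range(HEIGHT)]
        x, y = WIDTH // 2, HEIGHT // 2          # Start from the center.
        for byte in digest:
            for step in range(4):
                bits = (byte >> (2 * step)) & 0b11
                # Bit 0 chooses left/right, bit 1 chooses up/down.
                x += 1 if bits & 0b01 else -1
                y += 1 if bits & 0b10 else -1
                # Clamp the walk to the board edges.
                x = max(0, min(WIDTH - 1, x))
                y = max(0, min(HEIGHT - 1, y))
                board[y][x] += 1
        return "\n".join(
            "".join(SYMBOLS[min(count, len(SYMBOLS) - 1)] for count in row)
            for row in board)

    print(drunken_bishop(b"example host key fingerprint"))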

    Figure 12: OpenSSH visual host key generation animated.

    The exact details of how the image is generated can be seen from OpenSSH’s source code. This source code also refers to a paper called Hash Visualization: a New Technique to improve Real-World Security. The fact that OpenSSH is usually limited to text based terminals makes it quite hard to use the more advanced Random Art methods described in the paper, but the idea of host key verification by using images has at least taken one form in the OpenSSH world.

    Conclusions

    I presented here some methods that I have encountered over the years on various sites and in various programs. I can not possibly cover all of them, as there are likely as many algorithms to create avatars as there are people inventing them. The biggest question with any of these algorithms is how many bits they should try to visualize and what types of graphical elements they can use in the visualization without just creating indistinguishable clutter.

  • Taking a look at python-afl

    I have been using american fuzzy lop to fuzz various C and C++ programs and libraries. It is a wonderfully fast fuzzer that is generally easy to get started with, and it usually finds new bugs in programs that have not been fuzzed previously. Support for american fuzzy lop instrumentation has also been added for other languages, and I decided to try out how it works with Python, more specifically with the reference CPython implementation.

    Fuzzing Python programs with american fuzzy lop

    American fuzzy lop generally works by running a program that is compiled with american fuzzy lop instrumentation built in. The program is executed with the afl-fuzz command, which modifies the input data that is fed to the program, monitors how the program behaves, and registers everything that causes abnormal program behavior. This works well for natively compiled programs, but causes various issues with interpreted programs.

    Python is by default an interpreted language, so to execute Python programs, you need to start a Python interpreter before executing your code. This means that if you instrumented the Python interpreter itself with american fuzzy lop instrumentation and ran the interpreter with afl-fuzz, you would mostly fuzz the inner workings of the interpreter, not the actual Python program.

    Fortunately there is the python-afl module that adds american fuzzy lop instrumentation for just the Python code instead of instrumenting the Python interpreter. For native programs, the american fuzzy lop compiler wrappers (afl-gcc, afl-clang, afl-clang-fast) add the necessary instrumentation and the connection to afl-fuzz. Python-afl is, however, designed in such a way that it doesn’t try to wrap the whole program, but requires you to create a wrapper module that initializes fuzzing.

    The simplest way to wrap a Python program with python-afl is to initialize python-afl and then run the program (assuming that a main() function exists):

    import afl
    
    afl.init()
    import fuzzable_module
    fuzzable_module.main()
    

    This script, saved to fuzz-wrapper.py, can then be run with the py-afl-fuzz command that wraps afl-fuzz for Python programs:

    py-afl-fuzz -m 400 -i initial-inputs/ -o fuzzing-results/ -- \
        python fuzz-wrapper.py @@
    

    More details about these command line switches can be found in the AFL readme file. This then brings out the famous american fuzzy lop status screen, but now for Python programs:

    Figure 1: afl-fuzz status screen with python-afl. Yes, I use white background.

    The next sections explain in more detail how to make fuzzing these programs more efficient and what pitfalls there can be in Python programs from the fuzzing efficiency point of view.

    Afl-fuzz modes and their python-afl equivalents

    Generally afl-fuzz provides 4 fuzzing modes that differ in how the program is executed between different fuzzing inputs:

    • Dumb mode that just executes the program by doing fork() and execv(). This is the slowest mode; it does not rely on any fancy tricks to speed up program execution and also does not provide any insight into how the program behaves with different inputs.
    • Basic fork server mode where the fuzzed binary does all the initialization steps that happen before calling the main() function and the program is then repeatedly forked from that point on. This also includes instrumentation that is compiled into the program, so there already is some insight into what is happening inside the program when a specific input is processed. There is also a QEMU mode for afl-fuzz that technically enables fork server mode for uninstrumented binaries, but with some performance penalty.
    • Deferred instrumentation that works in a similar fashion to the basic fork server mode. Instead of forking just before calling the main() function, this makes it possible to move the fork point further down the line so that heavy program initialization steps can be skipped if they can be executed independently of the input.
    • Persistent mode where the fuzzable part of the program is repeatedly executed without resetting the program memory every time the program is called. This only works in practice if the program does not have a modifiable global state that can not be reset to the previous state.

    Afl-fuzz generates new inputs and analyzes the program execution results at roughly the same speed regardless of the mode, so these modes are listed in order of efficiency, in the sense of how much overhead there is for fuzzing one input. They are also in order of complexity regarding how easy they are to integrate into an existing program that has not been designed to be fuzzed, especially as the fastest modes require clang to be available as a compiler and the fuzzable program needs to be able to be compiled and linked with it.

    Python-afl, fortunately, provides equivalent modes without having to use special tools. These are also very fast to try out, as you don’t need to compile Python programs from scratch.

    The dumb mode would just be the equivalent of running the Python interpreter directly with afl-fuzz without any instrumentation, so we will skip over it. The more interesting part is to use the deferred instrumentation. The code in the introductory section called afl.init() before the fuzzable module was imported. This is the safest approach, as the fuzz target might do something with the input at import time. But more realistically, Python programs generally only call import statements, possibly conditionally, during start-up and don’t handle any user provided data yet. So in this case, we can do the imports first and move the afl.init() call to just before the point where the actual work happens:

    import afl, fuzzable_module
    
    afl.init()
    fuzzable_module.main()
    

    We can gain some additional speed-up by calling the os._exit() function instead of letting Python exit in the usual fashion, where all the destructors and other functions registered to run at exit are called:

    import afl, fuzzable_module, os
    
    afl.init()
    fuzzable_module.main()
    os._exit(0)
    

    The previous examples assume that the input file generated by the fuzzer comes as the first parameter on the command line. This is quite a good assumption, as many data processing modules for Python include a command line interface where they read and process files given on the command line. But if we can directly call the data processing function, we can instead use the standard input to feed the data:

    import afl, fuzzable_module, os, sys
    
    afl.init()
    fuzzable_module.process_data(sys.stdin)
    os._exit(0)
    

    Python 3 brings additional complexity here. Python 3 processes the standard input using the encoding specified in the environment, which in Unix environments is often UTF-8. As afl-fuzz mostly does bit manipulation, the input is going to end up as broken UTF-8 data and reading from the standard input file object results in an exception. To work around this, you can use sys.stdin.buffer instead of sys.stdin in Python 3 based programs, or create a shim that always results in raw bytes:

    import afl, fuzzable_module, os, sys
    
    try:
        # Python 3:
        stdin_compat = sys.stdin.buffer
    except AttributeError:
        # There is no buffer attribute in Python 2:
        stdin_compat = sys.stdin
    
    afl.init()
    fuzzable_module.process_data(stdin_compat)
    os._exit(0)
    

    The fastest, persistent mode requires that the program does not have a global state where the previous program execution affects the next one. Unfortunately there is a surprising amount of global state in Python programs. It is not that uncommon to initialize some specific variables only once during program execution and then re-use the results later. This usually is harmless, but it negatively affects the program stability that afl-fuzz shows on its status screen.

    Persistent mode code for a fuzzable program could look like the following, including the learnings from the deferred instrumentation and its speedups:

    import afl, fuzzable_module, os, sys
    
    try:
        # Python 3:
        stdin_compat = sys.stdin.buffer
    except AttributeError:
        # There is no buffer attribute in Python 2:
        stdin_compat = sys.stdin
    
    while afl.loop(10000):
        fuzzable_module.process_data(stdin_compat)
    os._exit(0)
    

    Benchmarking different afl-fuzz modes

    I wanted to measure how these different afl-fuzz modes behave with Python, so I created a small fuzz target whose main algorithm does some conditional computation based on the input data and prints out the result:

    def fuzz_one(stdin, values):
        data = stdin.read(128)
        total = 0
        for key in data:
            # This only includes lowercase ASCII letters:
            if key not in values:
                continue
            value = values[key]
            if value % 5 == 0:
                total += value * 5
                total += ord(key)
            elif value % 3 == 0:
                total += value * 3
                total += ord(key)
            elif value % 2 == 0:
                total += value * 2
                total += ord(key)
            else:
                total += value + ord(key)
        print(total)
    

    This is just to exercise the fuzzer a little bit more than a trivial function that does nothing would. I also created an equivalent fuzz target in C++ to give numbers for comparing what kind of overhead different fuzzing modes incur for both Python and native applications. The approximate results are summarized in table 1. The actual scripts used to generate this data are available from the following links: measure-times.sh, target-simple.template.py, and target-simple.cpp.

    Mode               Python 2    Python 3    Native
    dumb mode          110/s       47/s        1200/s
    pre-init           130/s       46/s        5800/s
    deferred           560/s       260/s       6800/s
    quick exit         2700/s      2100/s      8700/s
    persistent mode    17000/s     15000/s     44000/s
    Table 1: afl-fuzz benchmarks for various fuzzing modes for Python 2, Python 3, and for C++ versions of the example fuzz target.

    These results show that it is possible to write a fuzzable program in Python in such a way that the fuzzing start-up overhead is only three to four times larger with afl-fuzz than for a native program. This is an excellent result considering that algorithms implemented in Python are generally considered to be 10-100 times slower than ones implemented in C family languages. And if you want to use Python for performance critical tasks, you are probably using Cython or writing the performance critical parts in C anyway.

    There is a clear performance difference between Python 2.7.14 and Python 3.6.4, especially when all the start-up and exit optimization tricks are not in use. This difference is also visible in the Python start-up benchmarks at speed.python.org. The difference gets smaller when the persistent mode is used, as the Python executable is not shut down immediately after processing one input. What can also help Python 3 in the persistent fuzzing mode is the fact that the tracing function that python-afl sets with sys.settrace() is called only half as often with Python 3 as it is with Python 2 for this fuzz target.

    More speed for repeated Python executions

    Python enables imports and other types of code loading at any phase of the program execution. This makes it quite possible that a program has not been fully loaded before it is executed for the first time. This is usually used as a mechanism to configure the program based on the runtime environment, and it also provides the possibility to write plugins, similarly to what dynamically loaded libraries provide.

    You can see from table 1 how the fuzzing overhead decreases a lot when the fuzzing start point is moved after the import statements for this simple example program. So when I was fuzzing an old version of flake8, I tried to see if it had any hidden configuration statements that would execute and cache their results for repeated calls. And it did!

    Initially I used the following type of fuzzing wrapper for flake8:

    import afl, sys, flake8.run
    afl.init()
    flake8.run.check_code(sys.stdin.read())
    

    It is basically a simple wrapper that imports all it needs and then fuzzes what is fed to the program from the standard input. But the performance of this was horrible, around 15 executions/second. So I tried to see what happens when I changed the code a little bit by calling the flake8.run.check_code() function with an empty string before setting the fuzzing starting point:

    import afl, sys, flake8.run
    # Make sure that the hidden runtime configuration is executed:
    flake8.run.check_code("")
    afl.init()
    flake8.run.check_code(sys.stdin.read())
    

    This doubled the execution speed to around 30 executions/second. It is still quite slow for the small inputs that afl-fuzz initially creates, but an improvement nonetheless. I looked at what flake8 does when it is executed and the following line popped up:

    from pkg_resources import iter_entry_points
    

    There basically is a hidden import statement in the execution path whose result is cached after it is encountered for the first time. This pkg_resources.iter_entry_points() function is also used to configure the program at runtime, which adds some extra overhead to the process.

    Flake8 also by default tries to execute checks in parallel with the multiprocessing module. This might be a good idea when you have multiple files to verify at once, but during fuzzing it just adds unneeded overhead. Also the fact that it starts a new process makes the fuzzer lose all information about what is happening in the subprocess. Fortunately in flake8 it was possible to override the detected multiprocessing support just by setting one variable to False, after which flake8 acts as if there is no multiprocessing support. This increased the average fuzzing speed of flake8 threefold.

    The final speedup with flake8 came from looking at how flake8 is constructed. It is basically a wrapper around the mccabe, pycodestyle, and pyflakes packages. So rather than fuzzing flake8, it is much more productive to create a fuzz target for each one of those packages individually. I did this for pycodestyle and ended up executing it at around 420 executions/second for trivial data and around 200 executions/second for more realistic data. So the basic recommendations on how to fuzz native programs also apply to Python.
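
    A wrapper for an individual package like pycodestyle could then look roughly like the flake8 one above. The sketch below assumes Python 3 and pycodestyle’s legacy Checker interface that accepts the input as a list of lines; it also reads raw bytes and decodes them leniently, for the reasons discussed above with the stdin_compat shim:

    import afl, os, sys

    import pycodestyle

    afl.init()
    # Read raw bytes and decode leniently, since afl-fuzz will happily
    # produce invalid UTF-8.
    source = sys.stdin.buffer.read().decode("utf-8", errors="replace")
    # Assumed interface: Checker(lines=...) takes the source as a list
    # of lines and check_all() runs all checks over it.
    pycodestyle.Checker(lines=source.splitlines(True)).check_all()
    os._exit(0)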

    Monkey patching around fuzzing barriers

    As american fuzzy lop knows almost nothing about the input format, it can encounter various impediments (see section 13 of afl’s README.txt) when the file format includes any values that depend on the previously encountered data. This is especially problematic with checksums and is also an issue with other mutation based fuzzers. To work around this in the C world, you can generally use the C preprocessor to turn off checksum verification when such a thing is encountered. The same applies to all other types of barriers that might skew fuzzing results, like random number usage.

    Unfortunately Python does not have preprocessor support by default, so this type of conditional compilation is out of the question. Fortunately Python provides the possibility to do monkey patching, where you can replace functions or methods at runtime. So to make a library more fuzzer friendly, you can monkey patch all functions related to data verification to always return True, or to return some constant value where checksums are concerned.

    I used this approach to fuzz the python-evtx 0.6.1 library. Python-evtx is a library to parse Windows event log files and is one of the first hits when you search the Python Package Index with the “pure python parser” keyword. The file format includes a CRC32 checksum that will prevent the fuzzer from meaningfully modifying the input file, as almost all modifications will create an incorrect checksum in the file.

    To monkey patch around this issue, I searched the source code for all functions that potentially have anything to do with checksum generation and made them always return a constant value:

    import Evtx.Evtx
    
    checksum_patches = (
        (Evtx.Evtx.ChunkHeader, "calculate_header_checksum"),
        (Evtx.Evtx.ChunkHeader, "header_checksum"),
        (Evtx.Evtx.ChunkHeader, "calculate_data_checksum"),
        (Evtx.Evtx.ChunkHeader, "data_checksum"),
        (Evtx.Evtx.FileHeader, "checksum"),
        (Evtx.Evtx.FileHeader, "calculate_checksum"),
    )
    
    for class_obj, method_name in checksum_patches:
        setattr(class_obj, method_name, lambda *args, **kw: 0)
    

    The checksum_patches variable holds all the methods that need to be overwritten to ignore checksums. You can use setattr() to overwrite these methods at the class level with an anonymous function, lambda *args, **kw: 0, that always returns 0 and accepts any arguments. Accepting any number of arguments in any fashion is enabled by *args and **kw, and this syntax is explained in the keyword arguments section of Python’s control flow tools manual.

    Takeaways

    I was impressed by how low the fuzzing overhead of Python programs can be when compared to C family languages. When looking at how python-afl is implemented, it becomes quite clear that the deferred mode of afl-fuzz plays a great part in this, as the whole Python environment and program initialization is skipped between different fuzzing runs.

    Making a Python module fuzzable was also easier than in C family languages. Although american fuzzy lop is already the easiest fuzzing engine to get started with, the fact that it uses clang for the more advanced tricks often gives headaches when trying to get software to compile. The fact that I did not have to use any modified Python interpreter to get started, but only had to import the afl module, made me realize how many steps I skipped that I normally have to do when using american fuzzy lop on new systems.

    Thanks to Python’s dynamic execution and monkey patching capabilities I could also try out fuzz target creation with external libraries without having to actually modify the original code. Selecting some specific functions to fuzz and overriding checksum generation would generally require nasty software patches with C family languages, especially if there is a main() function to override.

    It was also nice to realize that the standard C library equivalent of os._exit(0), the _Exit(0) function call, can help make light native fuzz targets even faster. The process cleanup in this relatively trivial C++ program adds almost 30% overhead to repeated program execution. It will likely break any sanitizer that does verification work during exit, though, like searching for dynamically allocated memory that was not freed.

  • Nanorepositories

    I recently encountered a microservice antipattern called nanoservice that is described in the following manner:

    Nanoservice is an Anti-pattern where a service is too fine grained. Nanoservice is a service whose overhead (communications, maintenance etc.) out-weights its utility.

    I have encountered a lot of similar situations with source code repositories, where different parts of a program or a system that is shipped as a single entity have been divided into smaller repositories. The code in these repositories was not used by anything other than the product (the unit of release) that the code was part of.

    In the most extreme case there were thousands of smaller repositories. Many of those repositories held fewer than 10 files in a deep directory hierarchy, implementing some tiny functionality of the whole, plus some boilerplate for building and repository management that could just have been a couple of extra lines in the build system of a larger entity.

    In more common cases, one product consists of dozens of smaller repositories where one or two repositories get over 90% of the whole weekly commit traffic and the other repositories just get a commit here and there. Or there are multiple interlinked repositories (see an example in the How many Git repos article) that depend on each other, and very often all of them need to go through the same interface changes.

    Sometimes there is also a situation where all the work is done in smaller repositories and there is one superproject[1, 2, 3] that is automatically updated when a commit happens in any of its child projects. So basically you have one big repository that just consists of pointers to smaller repositories and adds one unneeded layer of indirection. Also, instead of making one commit that would reveal integration problems immediately, you now need multiple commits to reveal these issues, with some extra unneeded delay.

    I would suggest calling these types of repositories nanorepositories. A nanorepository is a repository that holds a subsystem that is too limited to stand on its own and needs a bigger system to be part of. This bigger system is usually also the only system that uses the nanorepository. The nanorepository is also owned by the same organization as the larger entity it’s part of, so it doesn’t give any advantages, for example, in access control. Nanorepositories can hold just a couple of files, but they can also be relatively large applications that are nevertheless tightly coupled with the system they are part of.

    Downsides of premature repository division

    Nanorepositories are a case of premature optimization for code sharing when there is no real need for it. There are articles (Advantages of monolithic version control, On Monolithic Repositories) and presentations (Why Google Stores Billions of Lines of Code in a Single Repository, F8 2015 - Big Code: Developer Infrastructure at Facebook’s Scale) talking about the advantages of monolithic repositories, but those advantages can be hard to grasp without knowing the disadvantages of the other end.

    I’ll list some issues that I have encountered when working with independent repositories. These all lead to a situation where developers need to spend extra time in repository management that could be avoided by grouping all software components that form the final product into one repository.

    Expensive interface changes

    Interface changes between components become expensive. You need cumbersome interface deprecation policies and a way to support the old and new interface versions between repositories until the change has propagated everywhere. It can take months or years to ensure that all interface users have made the intended interface change. And if you don’t have a good search at your disposal, you still can’t be sure about it before you actually remove the old interface.

    With separate repositories it’s often the case that you can’t easily search where the interface you are deprecating is used. This means that you don’t know beforehand what actually depends on the interface. There naturally are search engines that span multiple repositories, but they very rarely beat a simple grep -r (or git grep) command whose output you can further filter with simple command line tools. Especially if there are hundreds of small repositories that you need to include in your search.

    Ignore file duplication

    Often you need to add ignore files (.gitignore, .hgignore, etc.) to prevent junk from going into the repository by accident. Situations that can generate junk next to your code include:

    • In-source build (versus separate build directories) generated files (*.a, *.o, *.exe, *.class, *.jar…).
    • Using any editor that creates backup and other files next to the file you are editing (*~, *.swp, *.bak…).
    • Using interpreted languages, like Python, whose default implementation byte compiles the scripts for faster start-up (*.pyc, *.pyo…).
    • Using integrated development environments that require their own project directories.

    All these generic ignore rules need to be included in every project in addition to the project specific ignores. The other possibility is forcing these ignore rules on developers themselves instead of taking care of them centrally. In the case of nanorepositories there are likely just one or two languages used per repository, so the number of ignore rules depends on the development environments that the developers work with. But it’s still needless duplication when you could get by without it.
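
    As an example, the kind of generic rules that end up copied into every repository’s .gitignore look something like this:

    # In-source build artifacts
    *.a
    *.o
    *.exe
    *.class
    *.jar
    # Editor backup and swap files
    *~
    *.swp
    *.bak
    # Byte-compiled Python files
    *.pyc
    *.pyo
    __pycache__/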

    Re-inventing inefficient build system rules

    Small repositories lead to having to reinvent build system rules for every repository from scratch if you want to test and build your component in isolation. The alternatives are a lot of code duplication, or references to a repository containing the common build rules. The same applies to test runners, as different levels of testing for different languages usually have their own test runners, sometimes multiple test runners per language, each with some non-default options that provide various advantages in test result reporting.

    Modern build systems, like ninja and Bazel, usually work by knowing the whole build graph of the system that they are trying to build. This makes it easier to discover dependencies and to rebuild only the parts that actually need to be rebuilt. Building every repository independently of the others leads to recursive build systems that treat their inputs as black boxes (Bitbake, npm, Maven, Make…). Changes in these black boxes are either communicated with version number changes or by always rebuilding the component in question. This leads to a wasteful process and resource usage when compared to trunk based development in a monolithic repository.

    Overly complicated continuous integration machinery

    One of the defining principles of modern software development is having a working continuous integration system in place. This ensures that independent changes also work when they leave the developer’s machine and when they are integrated with the rest of the product that the change is part of. And this is done multiple times every day. This, combined with trunk based development, keeps integration issues short (minutes to days) and avoids the many-month release freezes of branched or forked development methods.

    Nanorepositories likely end up with repository specific checks in the continuous integration machinery that only verify that the component in the repository itself works well. And if this continuous integration machinery generates per repository check jobs automatically, it likely needs an entry point (like a make test or test.sh script) to execute those tests. The same applies to compilation. Not to mention the extra work when trying to compile and test against different systems and runtime instrumentation (like AddressSanitizer).

    When the component finally gets integrated with everything else and the system breaks, figuring out the exact commit where the breakage happened (besides the integrating one) can be really painful. This is because there can easily be dozens to thousands of commits between component releases. See a physical world example where components work perfectly together, but fail when integrated. And its hotfix.

    A case for small repositories

    Nanorepositories should not be confused with small independent repositories, as not everything needs to aim to be part of a bigger product. A very common reason for small repositories is the combination of ownership management and shareable components. Most open source projects are owned by certain people, or an organization, and there is just no good case for them to be part of anything bigger. Especially if they provide independent components that really are used by multiple external entities. The same downsides, however, generally apply to a collection of open source projects as to products consisting of multiple repositories.

  • Static code analysis and compiler warnings

    Compiler generated warnings are one form of static code analysis that provides a codified form of certain types of beneficial programming practices. Nowadays modern compilers used to compile C family languages (C, C++, and Objective-C) provide hundreds of different warnings whose usefulness varies depending on the project and its aims.

    In this article I will examine what level of issues compiler warnings can find, what the cost of enabling warnings is, and analyze the compiler warning flag lists of both the clang and GCC compilers.

    Levels of static code analysis

    Compiling C family languages usually involves a preprocessor, a compiler, an assembler, and a linker. This also leads to a situation where static code analysis can be done in various phases of program construction. These generally are:

    • Analysis on plain source files.
    • Analysis on preprocessed source files.
    • Analysis on compilation unit level.
    • Link-time analysis.

    This multi-stage program construction causes difficulties for tools that are not called with the exact same arguments as the compiler, that is, with the information about preprocessor definitions and include and library directories. For example tools like splint, Cppcheck, and many editor front-ends work outside the build system and can produce false warnings because they can not see inside some macro definitions that were not included in the simple static analysis setup. This becomes an issue with larger projects that do not necessarily have the most straightforward build setups and the most trivial header file inclusion policies. This does not mean that such tools are useless, but they will produce false positive warnings that can be really annoying unless they are silenced or ignored in some way.

    Analysis on preprocessed source files already provides a pretty accurate picture of what kind of issues there can be in the program, but it is not necessarily enough. In the compilation phase compilers constantly transform the program into new, functionally equivalent forms during the optimization phases, which can even result in unexpected code removal that is not necessarily trivial to notice. The compilation phase also gives more opportunities for target platform specific static code analysis. For example pipeline stalls, or value overflows due to incorrect assumptions about data type sizes, can usually be noticed only after the target platform is known.

    The final phase in program construction that provides options for static analysis is the linking phase. In the linking phase the linker makes sure that all the functions and global variables that the program refers to come from somewhere and that there are no conflicting duplicate names defined. This should also enable some automatic detection capabilities for issues like memory leaks that result from calling functions defined in different compilation units. I’m not sure if any freely available static analyzer does this.

    Compiler warning flags

    Compiler warning flags are one way to do static code analysis that covers all possible phases of program construction. This assumes that the compiler is involved in all phases of program construction. And it usually is, as in all phases from preprocessing to linking the compiler front-end is used as a wrapper for the tools that do the actual hard work.

    Warning flags and compilation time

    Using static code analysis in the form of compiler warnings incurs some penalty, as the compiler needs to execute some extra code in addition to the normal work related to compilation. To measure this penalty and to contrast it with some more advanced static analysis tools, I did some benchmarks by compiling Cppcheck 1.73 and FFTW 3.3.4 with clang 3.8, GCC 6.1, and Infer 0.8.1 using the -O3 optimization level. Cppcheck is a program mainly written in C++ and FFTW is mainly written in C. Infer has some experimental checks for C++ enabled with the --cxx command line option, so I ran Infer twice for Cppcheck, with and without C++ checks. Clang had all warnings enabled (-Weverything) and GCC had all warning options that did not require any special values. This resulted in the following minimum execution times out of 3 runs:

    Compiler      Program    No warnings     All warnings
    clang         Cppcheck   59.3 s          1 min 1.1 s (+ 3.0 %)
    GCC           Cppcheck   1 min 32.7 s    1 min 38.8 s (+ 6.6 %)
    Infer         Cppcheck   -               17 min 50 s (18x slower)
    Infer --cxx   Cppcheck   -               1 h 36 min (97x slower)
    clang         FFTW       40.5 s          40.9 s (+ 1 %)
    GCC           FFTW       42.7 s          58.1 s (+ 36 %)
    Infer         FFTW       -               4 min 43 s (10x slower)
    We can see that for clang and GCC the extra processing time added even by enabling all warnings is pretty small compared to all the other compilation and optimization steps for a C++ application (Cppcheck). But for a mostly C based application (FFTW) GCC gets surprisingly heavy, although the build times still remain within the same order of magnitude.

    If we then compare this with the time that a heavier static code analyzer takes, these compiler warnings are an extremely cheap way to add static code analysis. They may not catch all the same bugs as the more advanced methods do, but they do offer a cheap way to avoid the basic mistakes.

    Warning flag lists

    I have created a project that can automatically parse compiler warning flags from the command line option definition files in clang and GCC. This came partially out of necessity and partially out of curiosity to examine, in an easy to digest format, what kind of options clang and GCC provide. Although both compilers provide some kind of list of warning flags as part of their documentation, those lists are pretty cumbersome to go through when the main interest is to first figure out what is available and only then look at the details.

    Warning options and deprecation

    Different compilers have different policies about backwards compatibility and deprecation. Looking at how warning options have evolved, GCC has not removed a single switch between versions 3.4 and 6.1; it has just made some of them do nothing (the -Wimport, -Wunreachable-code, and -Wmudflap switches). Clang on the other hand has removed multiple switches between versions, and for example there are no references to -Wcxx98-cxx11-compat in the current codebase even though clang 3.3 had such a switch.

    Examining differences visually

    Generating large purely textual differences between different files becomes quite cumbersome as soon as you want to do anything more complicated than a simple diff of the unique command line options between two subsequent versions. For example, figure 1 shows what other warnings the -Wall flag enables in GCC 6 when compared to GCC 5. We can see that quite many extra warnings have been added to the -Wall switch, so newer compiler versions provide extra analysis capabilities even without adding all the new options individually.

    Figure 1: Meld showing the differences in what flags -Wall enables between GCC 5 and 6.

    From figure 2 we can also see that GCC 6 uses -Wc++11-compat as the default name of the warning flag indicating differences between ISO C++ 1998 and ISO C++ 2011 for constructs that have the same name, instead of -Wc++0x-compat, which refers to a draft standard. So GCC has basically deprecated the -Wc++0x-compat switch in favor of a switch that refers to the actual standard.

    Figure 2: -Wc++0x-compat is an alias of -Wc++11-compat in GCC 6 instead of the other way around.

    Suggestions for usable warning options

    I won’t be giving any specific suggestions here for warning flags, as there seem to be new options for each subsequent compiler release. A good place to start is NASA’s JPL Institutional Coding Standard for the C Programming Language, which includes a very short list of rudimentary warning flags for GCC. It also includes a short list of coding rules, each of which would have prevented a mission failure for NASA. The SEI CERT coding standards for secure coding also provide automatically generated lists of clang warning flags and GCC warning flags based on the issues that these standards take into account.

    And finally, check out the warning flag lists for clang and GCC and make your own combinations that bring the most benefit for whatever you are working with. Not all of them are appropriate for your project, and some of them may even work against the useful development patterns that you have.

    Cautionary tales about compiler warnings flags

    Even though it might sound like a good idea to rush in and fix all the issues that these new compiler warning flags uncover, doing so can actually introduce new bugs. Specifically, the SQLite database engine has had its own take on compiler warnings and their fixing, and its developers have concluded that fixing compiler warnings has actually produced some extra bugs that would never have come to light had there been no attempts to fix the warnings.

    I have also had my own take on compiler warning fixes, and sometimes I have screwed up and messed up perfectly working code while fixing a misleading warning. But generally my own experience has led to more fixes than bugs. And the coolest thing is that having these warnings enabled as part of the standard development process prevents some bugs from ever creeping into the application in the first place.

  • Being a good CPU neighbor

    Computational tasks are very often roughly divided into real-time and batch processing tasks. Sometimes you might want to run tasks that take a large amount of computation resources and get the result as fast as possible, but you don’t really care exactly when you get the result.

    In larger organizations there are shared computers that usually have a lot of CPU power and memory available and that are mostly idle. But they are also used by other users, who will be quite annoyed if they suffer from long delays to key presses or similar issues because your batch processing tasks take away all the resources the system has. Fortunately *nix systems provide different mechanisms, available to non-root users, to avoid these kinds of issues.

    These issues can also arise in situations where you have a system that can only allocate static resource requirements to tasks, like Jenkins. As an example, you might have some test jobs that need to finish within a certain time limit mixed with compilation jobs that should finish as soon as possible, but that can yield some resources to the higher priority test jobs, as they don’t have as strict time limitation requirements. And you usually don’t want to limit the resources that compilation jobs can use when they run on an otherwise idle machine.

    Here I’m specifically focusing on process priority settings and other CPU scheduling settings provided by Linux, as it currently happens to be probably the most used operating system kernel for multi-user systems. These settings affect how much CPU time a process gets relative to other processes and are especially useful on shared overloaded systems where there are more runnable processes than there are CPU cores available.

    Different process scheduling priorities in Linux

    Linux and POSIX interfaces provide different scheduling policies that define how the Linux scheduler allocates CPU time to a process. There are two main groups of scheduling policies for processes: real-time priority policies and normal scheduling policies. Real-time policies are usually accessible only to the root user and are not the point of interest here, as they can not usually be used by normal users on shared systems.

    At the time of writing this article there are three normal scheduling policies available to normal users for process priorities:

    1. SCHED_OTHER: the default scheduling policy in Linux with the default dynamic priority of 0.
    2. SCHED_BATCH: policy for scheduling batch processes. This schedules processes in a similar fashion to SCHED_OTHER and is affected by the dynamic priority of a process. It makes the scheduler assume that the process is CPU intensive, which makes it harder for the process to wake up from a sleep.
    3. SCHED_IDLE: the lowest priority scheduling policy. Not affected by dynamic priorities. This corresponds to a nice level priority of 20.

    The standard *nix way to change process priorities is to use the nice command. By default in Linux, a process can get nice values of -20–19, where the default nice value for a process is 0. When a process is started with the nice command, its nice value is adjusted from this default (by 10 if no explicit adjustment is given). Values 0–19 are available to a normal user and -20–-1 are available to the root user. The lowest priority nice value of 19 gives a process 5% of CPU time with the default scheduler in Linux.

    The chrt command can be used to give a process access to the SCHED_BATCH and SCHED_IDLE scheduling policies. chrt can be used in combination with the nice command to lower the priority of a SCHED_BATCH process further. And using the SCHED_IDLE (= nice level 20) policy should give around 80% of the CPU time that nice level 19 gets, as each nice level step changes the process weight by a factor of about 1.25 relative to the next lower priority level.
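
    The same policies are also reachable programmatically. For example, on Linux a Python 3 process can put itself into SCHED_IDLE with the standard os module (a sketch, Linux only):

    import os

    # Drop this process to the lowest priority scheduling policy.
    # The static priority must be 0 for the non-real-time policies.
    os.sched_setscheduler(0, os.SCHED_IDLE, os.sched_param(0))

    # Alternatively, SCHED_BATCH combined with a high nice value:
    # os.nice(19)
    # os.sched_setscheduler(0, os.SCHED_BATCH, os.sched_param(0))

    print(os.sched_getscheduler(0) == os.SCHED_IDLE)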

    Benchmarking with different competing workloads

    I wanted to benchmark the effect of different scheduling policies on the work done by a specific benchmark program. I used a system with a Linux 3.17.0 kernel and an Intel Core i7-3770K CPU at 3.5 GHz with hyperthreading and frequency scaling turned on, and 32 gigabytes of RAM. I didn’t manage to make the CPU frequency constant, so results vary a little between runs. I used John the Ripper’s bcrypt benchmark, started with the following command, as the test program for useful work:

    $ /usr/sbin/john --format=bcrypt -test
    

    I benchmarked 1 and 7 instances of John the Ripper for 9 iterations (9 * 5 = 45 seconds) with various amounts of non-benchmarking processes running at the same time. 1 benchmarking process should by default not have any cache contention caused by other benchmarking processes, and 7 benchmarking processes saturate all but 1 logical CPU core and can have cache and computational unit contention with each other.

    The non-benchmarking processes were divided into different categories, and 0–7 times the number of CPU cores of them were started to make them fight for CPU time with the benchmarking processes. The results of running different amounts of non-benchmarking processes with different scheduling policies can be found in tables 1, 2, 3, and 4. The different scheduling policies visible in those tables are:

    • default: the default scheduling policy. The command is started without any wrappers. Corresponds to SCHED_OTHER with a nice value of 0.
    • chrt-batch: the default SCHED_BATCH scheduling policy. The command is started with a chrt --batch 0 prefix.
    • nice 10: the SCHED_OTHER scheduling policy with a nice value of 10, the default of the nice command. The command is started with a nice prefix.
    • nice 19: the SCHED_OTHER scheduling policy with a nice value of 19. The command is started with a nice -n 19 prefix. This should theoretically take around 5 % of the CPU time away from the useful work.
    • nice 19 batch: the SCHED_BATCH scheduling policy with a nice value of 19. The command is started with a nice -n 19 chrt --batch 0 prefix.
    • sched idle: the SCHED_IDLE scheduling policy. The command is started with a chrt --idle 0 prefix. This should theoretically take around 80 % of the CPU time that nice level 19 takes away from the useful work.

    The results in those tables show the relative average percentage of work done compared to the situation where no additional processes disturb the benchmark. They only show the average value and not, for example, the variance, which could be useful for determining how close those values are to each other. You can download the raw benchmarking data and the source code used to generate and analyze it from the following links:

    CPU looper

    The CPU looper application consists purely of code that causes CPU load. Its main purpose is to test the scheduling policies without forcing CPU cache misses. Of course there will be context switches that can replace the data cache entries related to the process state and the instruction cache entries of one process with those of another, but there is no extra intentional cache flushing happening here.

    The actual CPU looper application consists of the code shown in figure 1, compiled without any optimizations:

    int main() {
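        // Busy loop forever: pure CPU load that does not touch memory.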
        while (1) {}
        return 0;
    }
    
    Figure 1: source code for CPU looper program.

    The results of starting 0–7 times the number of logical CPU cores of CPU looper processes with the default priority and then running the benchmark with 1 or 7 workers can be found in tables 1 and 2.

    1 worker                  |   0 |    1 |    2 |    3 |    4 |    5 |    6 |    7
    default                   | 100 | 64.6 | 36.4 | 25.5 | 18.2 | 15.4 | 12.5 | 11.0
    nice 0 vs. sched batch    | 100 | 66.5 | 37.0 | 24.0 | 16.3 | 14.8 | 12.2 | 10.9
    nice 0 vs. nice 10        | 100 | 93.3 | 90.2 | 82.7 | 75.4 | 75.2 | 75.0 | 74.9
    nice 0 vs. nice 19        | 100 | 92.9 | 90.4 | 91.0 | 91.8 | 89.8 | 89.4 | 90.9
    nice 0 vs. nice 19 batch  | 100 | 94.0 | 89.2 | 91.4 | 89.8 | 86.4 | 89.4 | 90.7
    nice 0 vs. sched idle     | 100 | 95.0 | 91.3 | 92.2 | 92.0 | 92.6 | 91.0 | 92.4
    nice 10 vs. nice 19       | 100 | 80.0 | 74.7 | 68.0 | 67.7 | 67.5 | 64.5 | 60.7
    nice 10 vs. nice 19 batch | 100 | 92.5 | 83.8 | 75.9 | 75.0 | 74.3 | 74.1 | 69.8
    nice 10 vs. sched idle    | 100 | 89.8 | 89.7 | 90.9 | 89.0 | 87.8 | 87.4 | 87.2
    nice 19 vs. sched idle    | 100 | 77.9 | 69.7 | 66.9 | 66.3 | 64.6 | 63.4 | 58.1

    Table 1: the relative percentage of average work done for one benchmarking worker when there are 0–7 times the logical CPU cores of non-benchmarking CPU hogging jobs running with different scheduling policies.

    7 workers                 |   0 |    1 |    2 |    3 |    4 |    5 |    6 |    7
    default                   | 100 | 49.8 | 33.6 | 24.6 | 19.4 | 16.4 | 13.8 | 12.2
    nice 0 vs. sched batch    | 100 | 51.1 | 34.0 | 25.1 | 20.0 | 16.6 | 14.2 | 12.4
    nice 0 vs. nice 10        | 100 | 92.9 | 87.2 | 80.1 | 75.0 | 71.0 | 66.3 | 61.1
    nice 0 vs. nice 19        | 100 | 94.1 | 94.9 | 95.4 | 95.2 | 96.2 | 95.7 | 96.0
    nice 0 vs. nice 19 batch  | 100 | 95.7 | 95.2 | 95.7 | 95.5 | 95.2 | 94.6 | 95.4
    nice 0 vs. sched idle     | 100 | 96.3 | 95.7 | 96.2 | 95.6 | 96.7 | 95.4 | 96.7
    nice 10 vs. nice 19       | 100 | 93.6 | 84.9 | 78.0 | 71.4 | 65.7 | 60.4 | 56.2
    nice 10 vs. nice 19 batch | 100 | 92.4 | 84.1 | 77.6 | 69.9 | 65.6 | 60.7 | 56.4
    nice 10 vs. sched idle    | 100 | 95.5 | 94.9 | 94.5 | 93.9 | 94.3 | 93.7 | 91.9
    nice 19 vs. sched idle    | 100 | 91.4 | 80.9 | 71.4 | 64.3 | 58.3 | 52.5 | 48.5

    Table 2: the relative percentage of average work done for seven benchmarking workers when there are 0–7 times the logical CPU cores of non-benchmarking CPU hogging jobs running with different scheduling policies.

    Tables 1 and 2 show that the effect is not consistent between different scheduling policies: a single benchmarking worker suffers more from the lower priority background processes than seven benchmarking workers do. On the other hand, with higher priority scheduling policies for the background processes, the average amount of work done for 1 worker stays higher under light loads than with 7 worker processes. These discrepancies can probably be explained by hyperthreading sharing the same physical CPU cores and by caching effects.

    Memory looper

    Processors nowadays have multiple levels of cache and no cache isolation, so memory accesses from one core can wreak havoc on other cores just by moving data from and to memory. So I wanted to see what happens with the different scheduling policies when multiple instances of a simple memory-intensive program run in the background while John the Ripper is trying to do some real work.

    #include <stdlib.h>
    
    int main() {
        // 2 * 20 megabytes should be enough to spill all caches.
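        // (For reference, the i7-3770K used in these tests has 8 MB of shared L3 cache.)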
        const size_t data_size = 20000000;
        const size_t items = data_size / sizeof(long);
        long* area_source = calloc(items, sizeof(long));
        long* area_target = calloc(items, sizeof(long));
        while (1) {
            // Just do some memory operations that access the whole memory area.
            for (size_t i = 0; i < items; i++) {
                area_source[i] = area_target[i] + 1;
            }
            for (size_t i = 0; i < items; i++) {
                area_target[i] ^= area_source[i];
            }
        }
        return 0;
    }
    
    Figure 2: source code for memory bandwidth hogging program.

    The program shown in figure 2 is compiled without any optimizations. It reads one word from memory, adds 1 to it, stores the result into another memory area, and then XORs the read memory area into the written one. It does not do anything useful, but it reads and writes a lot of data to memory every time it gets some execution time.

    Tables 3 and 4 show the relative effect on the John the Ripper benchmark when various numbers of instances of the program shown in figure 2 are running at the same time. If you compare these numbers to the values in tables 1 and 2, where a program that only burns CPU cycles is running, the numbers for useful work can in some cases be around 10 percentage points lower. So there is apparently some cache contention going on with this benchmarking program, and the effect of lower priority scheduling policies is not what could theoretically be expected just from the allocated time slices.

    1 worker                  |   0 |    1 |    2 |    3 |    4 |    5 |    6 |    7
    default priority          | 100 | 61.7 | 32.6 | 21.8 | 16.6 | 13.3 | 11.2 |  9.8
    nice 0 vs. sched batch    | 100 | 60.2 | 32.6 | 22.7 | 16.2 | 12.9 | 10.9 | 10.2
    nice 0 vs. nice 10        | 100 | 85.2 | 82.1 | 75.8 | 70.8 | 69.6 | 70.5 | 68.1
    nice 0 vs. nice 19        | 100 | 85.4 | 83.0 | 79.7 | 81.7 | 78.2 | 81.9 | 84.7
    nice 0 vs. nice 19 batch  | 100 | 83.8 | 81.2 | 80.5 | 83.4 | 80.4 | 79.4 | 84.3
    nice 0 vs. sched idle     | 100 | 82.9 | 80.9 | 81.2 | 82.1 | 80.6 | 82.3 | 81.8
    nice 10 vs. nice 19       | 100 | 80.0 | 74.7 | 68.0 | 67.7 | 67.5 | 64.5 | 60.7
    nice 10 vs. nice 19 batch | 100 | 80.8 | 74.1 | 67.6 | 67.1 | 67.9 | 65.5 | 63.3
    nice 10 vs. sched idle    | 100 | 83.9 | 81.6 | 79.2 | 77.0 | 75.9 | 76.8 | 77.1
    nice 19 vs. sched idle    | 100 | 77.9 | 69.7 | 66.9 | 66.3 | 64.6 | 63.4 | 58.1

    Table 3: the relative percentage of average work done for one benchmarking worker when there are 0–7 times the logical CPU cores of non-benchmarking CPU and memory bandwidth hogging jobs running with different scheduling policies.

    7 workers                 |   0 |    1 |    2 |    3 |    4 |    5 |    6 |    7
    default                   | 100 | 48.4 | 32.6 | 24.5 | 19.7 | 16.3 | 14.2 | 12.3
    nice 0 vs. sched batch    | 100 | 48.6 | 32.2 | 23.7 | 19.4 | 16.2 | 14.0 | 12.6
    nice 0 vs. nice 10        | 100 | 89.3 | 81.5 | 74.6 | 69.2 | 64.6 | 60.4 | 56.3
    nice 0 vs. nice 19        | 100 | 92.0 | 90.5 | 90.4 | 91.2 | 91.3 | 90.6 | 90.5
    nice 0 vs. nice 19 batch  | 100 | 91.5 | 91.8 | 92.5 | 91.8 | 91.9 | 92.1 | 91.9
    nice 0 vs. sched idle     | 100 | 92.1 | 91.9 | 91.9 | 91.5 | 92.0 | 92.4 | 92.1
    nice 10 vs. nice 19       | 100 | 85.4 | 77.3 | 70.4 | 63.3 | 58.3 | 54.2 | 50.3
    nice 10 vs. nice 19 batch | 100 | 86.8 | 77.7 | 70.0 | 63.4 | 58.8 | 54.4 | 50.6
    nice 10 vs. sched idle    | 100 | 90.6 | 89.0 | 88.9 | 88.9 | 88.0 | 87.4 | 86.3
    nice 19 vs. sched idle    | 100 | 82.8 | 72.9 | 62.9 | 57.3 | 52.2 | 48.2 | 44.1

    Table 4: the relative percentage of average work done for seven benchmarking workers when there are 0–7 times the logical CPU cores of non-benchmarking CPU and memory bandwidth hogging jobs running with different scheduling policies.

    Conclusions

    These are just the results of one specific benchmark with two specific competing workloads on one specific machine with 4 hyperthreaded CPU cores. They should anyway give you some kind of an idea of how different CPU scheduling policies under Linux affect the load that you do not want to be disturbed when your machine is more or less overloaded. Clearly, when the background processes read and write a lot of memory, a lower priority background process has a bigger impact on the actual worker process than what the allocated time slices would predict. But, unsurprisingly, a single user can have quite a large impact on how much their long running CPU hogging processes affect the rest of the machine.

    I did not investigate how much work the processes that are supposed to disturb the benchmark get done themselves. If the goal is to get the most background work done while disturbing other users as little as possible, SCHED_BATCH with a nice value of 19 could probably be the best scheduling policy. Otherwise, the SCHED_IDLE policy, taken into use with the chrt --idle 0 command, does have the lowest impact on other processes, as promised, especially when compared to processes started with nice values lower than the default one.
