get<N>(tuple) considered harmful

I have been toying around with a custom implementation of tuple and came across this pitfall. Suppose you need a tool that produces the tail of a given tuple. What’s a tuple? It can be std::tuple or it can be mytuple, so the tool is best made dependent on the tuple’s template-template parameter together with the tuple’s type pack:

    template<template <class...> class Tuple, class ArgHead, class... ArgTail>
    auto tuple_tail(Tuple<ArgHead, ArgTail...>& t) -> Tuple<ArgTail...>;

The implementation must construct a tuple with a pack expansion involving only the tail arguments. The types are easy, but to get the values out of t we need an index sequence running from 1 to sizeof...(ArgTail). It is straightforward to get such a sequence with std::index_sequence_for, but then it starts at 0, which we need to drop. We then send the index sequence together with the tuple t to a helper function that does the actual job. All of that should be fairly straightforward c++ for anybody with a couple of years of experience (did I say a couple? I meant a couple of dozen, of course…):
Read more of this post

On would-be const constructors

In this post I would like to discuss the idea of having constructors, or some equivalent, designed to build only objects that are const. My motivating example is an attempt to implement matrices and matrix views. I believe this is a canonical example: the temptation to have const constructors appears in the const methods of the principal class, the chief context in which the available data suffice to build only a const-restricted object. I will try to be clever implementing them, then show why it does not really work, and what lessons should be learnt from the exercise.
Read more of this post

Generating set partitions – replacing virtual functions with protected inheritance

The purpose of this long post is twofold: first, I really am in need of an efficiently implemented set partition enumerating library, and second, I would like to use this work as a pretext to share and discuss the designs that I considered. The latter is the main part here, as the algorithms themselves are taken from other people’s work, and the c++ discussion does not require a thorough understanding of them.

A set partition is a way to split the elements of a finite set into disjoint subsets, called blocks. One way to model it is to take the set of natural numbers {0,1,…,n−1} as the initial set and give each member a block number. Thus, for n=4 the partition (0,1,0,1) defines two blocks: block 0 containing elements 0 and 2, and block 1 containing elements 1 and 3. Of course, had we taken the partition (1,0,1,0), we would have got the same split, so to avoid repetition there has to be a normalization rule: we start with block 0, and whenever a new block is needed, it gets the smallest number unused so far. A nice overview of the algorithms that accomplish this task has been gathered by Michael Orlov (section Technical reports at ), and there is even an accompanying implementation, but I think that with modern c++ one can now do better. I need it for a student of mine who is doing research on industrial process optimisation, as well as for purely mathematical research: set partitions have a lot to do with probability, see for instance the explanatory paper by Roland Speicher:

1. Elementary solution for all partitions

Partitions of fixed size can be viewed as an ordered collection of immutable elements, and the most natural way to implement this kind of object in c++ is a collection object equipped with iterators, technically speaking const_iterators. Actually, most of the work is done in the iterators in this case; the collection object is just needed for the convenience of having begin and end methods, which, for instance, enables the range-based for loop syntax. Michael Orlov’s paper details four algorithms: initialization of the first and last partition, and generation of the next and previous partition, in terms of two sequences: the current partition k and a running maximum m of the used block numbers. I will not analyse the algorithms themselves, as that is not necessary for the c++ discussion that follows, and Michael has already done a perfect job, so please consult his text. So, my first take at implementing the unrestricted set partitions is as follows:
Read more of this post

Introduction to random number generation in c++

This article is intended as a concise introduction to using the (pseudo-)random number generation facilities present in c++11. What I write about certainly isn’t new, but I need an easy-reading place I can point people to. That does not mean there are no problems with the standard services: some are fundamental and are mentioned in the articles I cite; others are programmatic and could use some, well, circumventing, which comes with all the c++ intricacy delight. But I will leave that out, waiting for a future second installment on the subject.

Structure of the problem

So you want a random number from a computer, right? You think it really is possible? After all, if your computer, a deterministic machine designed to follow your program to the letter, started doing things at random, you would most likely declare it broken and either try to repair it or scrap it. No way. And yet, you play games, you run your Monte Carlo simulations, and all of that claims to be using some sort of randomness. So where is it to be found in your computer? The answer has three parts.
Read more of this post

Nested class privacy leak: bug or feature?

Most programmers know that classes in c++ can be nested, and that the nested ones can be made private if they are not needed outside the implementation of the host class. The canonical example is a linked list:
Read more of this post

Polymorphic objects with value semantics – custom interfaces


In my previous post I showed how we could provide object handles, called polymorphic, that cure most of the troubles that follow from using pointers, even smart pointers. Given a class b with base class a, we could write

{
    polymorphic<a> p1(create<b>(args_to_constructor_of_b));
    p1->foo();                     // properly calls foo, virtual or not
    polymorphic<a> p2 = p1;        // real deep copy of the contained b
    const auto p3 = std::move(p1); // real move from p1
    p3->mutate();                  // doesn't compile: ordinary const semantics
    b bobj;
    p2 = bobj;                     // properly copies bobj of type b into p2
    std::cout << p2;               // calls operator<< suitable for b
}   // p2, p3 correctly delete b even with no virtual destructor in a

All of this was achieved by keeping and leveraging a second virtual dispatch table inside polymorphic. The remaining question was how to open up polymorphic to additional, configurable interfaces (vtables). I will try to present two solutions to this problem in this post.
Read more of this post

Polymorphic objects with value semantics

(Updated on 27.11.2016 to correct the forwarding-through-tuples infrastructure; the corrected code is shown in the post, and the old version is available in collapsed sections. An update to the follow-up post will be posted in a few days.)

My last article got posted on reddit, quite unexpectedly for me, and in the comments some people mentioned things they find abominable in c++ or in my coding. It’s great to know someone cares, so I decided to share one thing I personally find abominable, and some ideas on how to cope. Please don’t hesitate to express your opinion; as usual, there’s a comment section below…

Too low level unique_ptr

C++11 introduced unique_ptr, which, together with move semantics, finally allowed for reasonable management of heap-allocated objects. Unfortunately, one was still forced to write std::unique_ptr<A> ua(new A{}), keeping new on the client’s side of things. This was somewhat corrected in C++14, where you can write std::unique_ptr<A> pa = std::make_unique<A>(). However, while unique_ptr has its merits in low-level implementations, I think that it is an abomination as far as dealing with objects belonging to class hierarchies is concerned. I say this because: there is no way to make deep copies (unless you hand-craft zillions of boilerplate clone methods all over the hierarchy); you can shoot yourself in the foot if you forget to make your base destructor virtual; the pointer objects have pointer const semantics; there is no immediate way to convert an existing object of one of the classes into the pointer; make_unique is the preferred way of creating the pointer, but it is impossible to create a nullptr pointer with it, so this case needs special treatment; this has no chance of working: std::cout << *pa; and, finally, you can call .get() on the smart pointer and leak the result (see for instance for possible problems). I will try to address all these issues in this article. I am not sure, as usual, whether somebody hasn’t already invented what I present here; if you know any references, please let me know. For a discussion on OO (aka class hierarchies) vs. value semantics see

Read more of this post