Inefficient implementation of ArrayKernel::opposite_halfedge_handle()
Currently, ArrayKernel::opposite_halfedge_handle() is implemented like this:
HalfedgeHandle opposite_halfedge_handle(HalfedgeHandle _heh) const
{ return HalfedgeHandle((_heh.idx() & 1) ? _heh.idx()-1 : _heh.idx()+1); }
Here is what gcc makes of this with -O2:
0x00000000004594a0 <+0>: lea -0x1(%rsi),%edx
0x00000000004594a3 <+3>: lea 0x1(%rsi),%eax
0x00000000004594a6 <+6>: and $0x1,%esi
0x00000000004594a9 <+9>: cmovne %edx,%eax
0x00000000004594ac <+12>: retq
Why don't we change it to this:
HalfedgeHandle opposite_halfedge_handle(HalfedgeHandle _heh) const
{ return HalfedgeHandle(_heh.idx() ^ 1); }
gcc -O2 compiles this into:
0x00000000004594a0 <+0>: mov %esi,%eax
0x00000000004594a2 <+2>: xor $0x1,%eax
0x00000000004594a5 <+5>: retq
which certainly looks more efficient to me: no conditional move and two fewer instructions.