c++ - Standard-compliant way to define a floating-point equivalence relationship


I'm aware of the usual issues with floating-point arithmetic and precision loss, so this is not the usual question about why 0.1 + 0.2 != 0.3 and the like.

Instead, I want to implement a binary predicate in C++ (in a 100% standard-compliant way) that implements a real mathematical equivalence relation (i.e. one that is reflexive, transitive, and symmetric), such that two doubles are in the same equivalence class if they represent the exact same value in all respects: distinguishing corner cases such as 0.0 versus -0.0, and treating all NaN values as being in the same equivalence class. (In particular, the default == is not what I want, because it is non-reflexive in the case of NaN, and it does not distinguish between 0.0 and negative -0.0, which belong in different equivalence classes since they are different values and can lead to different runtime behavior.)
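For example, here is a quick illustration of those two shortcomings of the built-in == (assuming the implementation provides quiet NaNs):

#include <cmath>
#include <iostream>

int main() {
    double nan = std::nan("");                // a quiet NaN
    std::cout << std::boolalpha;
    std::cout << (nan == nan) << '\n';        // false: == is non-reflexive for NaN
    std::cout << (0.0 == -0.0) << '\n';       // true: == conflates the two zeros
    std::cout << std::signbit(0.0) << ' '
              << std::signbit(-0.0) << '\n';  // false true: yet they are distinct values
}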

What's the shortest and simplest way to do this that does not rely on type punning in any way or on implementation-defined behavior? So far I've got:

#include <cmath>

bool equiv(double x, double y) {
    return (x == y && (x != 0.0 || std::signbit(x) == std::signbit(y))) ||
           (std::isnan(x) && std::isnan(y));
}
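As an illustrative sanity check (the test driver below is just a sketch, not part of the predicate itself), this is how equiv is expected to behave on the corner cases above:

#include <cassert>
#include <cmath>
#include <limits>

bool equiv(double x, double y) {
    return (x == y && (x != 0.0 || std::signbit(x) == std::signbit(y))) ||
           (std::isnan(x) && std::isnan(y));
}

int main() {
    double nan = std::numeric_limits<double>::quiet_NaN();
    assert(equiv(nan, nan));    // all NaNs end up in one equivalence class
    assert(!equiv(0.0, -0.0));  // the two zeros are in different classes
    assert(equiv(1.5, 1.5));    // ordinary values behave like ==
    assert(!equiv(1.5, 2.5));
}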

I believe this handles the corner cases I know of and described earlier, but are there other corner cases it doesn't handle that I'm missing? And is the above binary predicate guaranteed to define an equivalence relation according to the C++ standard, or is any of its behavior unspecified, implementation-defined, etc.?

It looks right.

You can get rid of the function calls on platforms that implement IEEE 754 (Intel's, POWER's and ARM's do) because the special floating-point values can be detected without any calls.

bool equiv(double x, double y) {
    return (x == y && (x || (1 / x == 1 / y))) || (x != x && y != y);
}

The above uses the following facts about IEEE arithmetic (a small demonstration follows the list):

  • Division of a non-zero value by 0 yields an infinity special value that retains the sign of the zero. Hence 1 / -0. yields -infinity. Infinity special values with the same sign compare equal.
  • NaNs do not compare equal, even to themselves.
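Here is a minimal sketch of those two facts, assuming std::numeric_limits<double>::is_iec559 is true (strictly, ISO C++ by itself leaves division by a floating-point zero undefined, which is exactly why this shortcut is not 100% portable):

#include <iostream>
#include <limits>

int main() {
    std::cout << std::boolalpha;
    std::cout << (1 / -0.0) << '\n';            // -inf: the sign of zero is preserved
    std::cout << (1 / 0.0 == 1 / 0.0) << '\n';  // true: infinities of the same sign compare equal
    double nan = std::numeric_limits<double>::quiet_NaN();
    std::cout << (nan != nan) << '\n';          // true: a NaN never compares equal, even to itself
}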

I would keep the original version though, since it reads better to most people. Judging from interviewing experience, not every developer knows how the special floating-point values arise and behave.

If NaNs had only one representation, memcmp would do.
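For reference, a bitwise comparison along those lines could look like the sketch below (my illustration). It distinguishes the two zeros, but NaNs with different payloads or signs land in different classes, which is why a single NaN representation would be required:

#include <cstring>

// Compares the object representations bit by bit. Correct for +0.0 vs -0.0,
// but different NaN encodings (payload, sign, quiet vs. signaling) would end
// up in different equivalence classes.
bool equiv_bits(double x, double y) {
    return std::memcmp(&x, &y, sizeof(double)) == 0;
}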


With regard to the C++ and C language standards, The New C Standard book says:

The term IEEE floating point is often heard. This usage came about because the original standards on this topic were published by the IEEE. This standard for binary floating-point arithmetic is what many host processors have been providing for over a decade. However, its use is not mandated by C99.

The representation for binary floating-point specified in this standard is used by the Intel x86 processor family, Sun SPARC, HP PA-RISC, IBM PowerPC, HP (was DEC) Alpha, and the majority of modern processors (some DSP processors support a subset, or make small changes, for cost/performance reasons; while others have more substantial differences, e.g., the TMS320C3x uses two's complement). There is also a publicly available software implementation of this standard.

Other representations are still supported by processors (IBM 390 and HP (was DEC) VAX) having an existing customer base that predates publication of the documents on which the standard is based. These representations are likely to continue to be supported for some time because of the existing code that relies on them (the IBM 390 and the HP (was DEC) Alpha support both their companies' respective older representations and the IEC 60559 requirements).

There is a common belief that once the IEC 60559 standard has been specified, all of its required functionality will be provided by conforming implementations. It is possible that a C program's dependencies on IEC 60559 constructs, which can vary between implementations, will not be documented because of this common, incorrect belief (the person writing the documentation may not be the person most familiar with the standard).

Like the C Standard, the IEC 60559 standard does not specify the behavior of every construct. It provides optional behavior for some constructs, such as when underflow is raised, and has optional constructs that an implementation may or may not make use of, such as double standard. C99 does not provide any method for finding out an implementation's behavior in these optional areas. For instance, there are no standard macros describing the various options for handling underflow.

And What Every Computer Scientist Should Know About Floating-Point Arithmetic says:

Languages and Compilers

Ambiguity

Ideally, a language definition should define the semantics of the language precisely enough to prove statements about programs. While this is usually true for the integer part of a language, language definitions often have a large grey area when it comes to floating-point. Perhaps this is due to the fact that many language designers believe that nothing can be proven about floating-point, since it entails rounding error. If so, the previous sections have demonstrated the fallacy in this reasoning. This section discusses some common grey areas in language definitions, including suggestions about how to deal with them.

... Another ambiguity in most language definitions concerns what happens on overflow, underflow and other exceptions. The IEEE standard precisely specifies the behavior of exceptions, and so languages that use the standard as a model can avoid any ambiguity on this point.

... Another grey area concerns the interpretation of parentheses. Due to round-off errors, the associative laws of algebra do not necessarily hold for floating-point numbers... Whether or not the language standard specifies that parentheses must be honored, (x+y)+z can have a totally different answer than x+(y+z), as discussed above.

.... Rounding can be a problem as well. The IEEE standard defines rounding very precisely, and it depends on the current value of the rounding modes. This sometimes conflicts with the definition of implicit rounding in type conversions or the explicit round function in languages.

The language standards cannot possibly specify the exact results of floating-point operations because, for example, one can change the rounding mode at run-time using std::fesetround.
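A minimal sketch of that effect (whether the run-time mode change is actually honored is implementation-dependent; strictly it requires #pragma STDC FENV_ACCESS ON, which not every compiler supports):

#include <cfenv>
#include <cstdio>

int main() {
    // volatile keeps the compiler from folding the division at compile time,
    // so the run-time rounding mode actually matters.
    volatile double one = 1.0, three = 3.0;

    std::fesetround(FE_TONEAREST);
    double nearest = one / three;   // rounded to nearest (the default)

    std::fesetround(FE_UPWARD);
    double upward = one / three;    // the same division, rounded upward

    std::fesetround(FE_TONEAREST);  // restore the default mode

    std::printf("%.17g\n%.17g\n", nearest, upward);  // typically differ in the last bit
}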

So the C and C++ languages have no choice but to map operations on floating-point types directly to hardware instructions and not interfere with them, which is what they do. Hence, the languages do not copy the IEEE/IEC standard and do not mandate it either.

