I started with this code:
    #include <stdio.h>

    #define myconst   (419*0.9)
    #define myconst1  (419*.9)
    #define myconst9  (myconst1*.9)
    #define myconst11 (419*1.1)

    int main(void)
    {
        printf("myconst:%d\n", myconst);
        printf("myconst1:%d\n", myconst1);
        printf("myconst9:%d\n", myconst9);
        printf("myconst11:%d\n", myconst11);
        return 0;
    }
which gave me this output:
myconst:1913895624
myconst1:1
myconst9:1
myconst11:1
I don't understand why multiplying by a fraction in a #define leads to values different from what I expected. The value of myconst varies from run to run; for example, I observed -254802840, -1713343480 and 466029496. And myconst1, myconst9 and myconst11 all give me 1, which doesn't seem logical to me.
Any explanation would be welcome.
The #define has no role in this; the problem is that you are passing a double to printf where it expects an int (because of the %d specifier). Technically this is undefined behavior, so anything is allowed to happen.
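For reference, here is a minimal corrected sketch (assuming the goal is simply to print the computed values): %f matches the double arguments produced by the macro expansions, so the output becomes well defined.

    #include <stdio.h>

    #define myconst   (419*0.9)
    #define myconst1  (419*.9)
    #define myconst9  (myconst1*.9)
    #define myconst11 (419*1.1)

    int main(void)
    {
        /* %f matches the double type of each macro expansion */
        printf("myconst:%f\n", myconst);
        printf("myconst1:%f\n", myconst1);
        printf("myconst9:%f\n", myconst9);
        printf("myconst11:%f\n", myconst11);
        return 0;
    }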
Now, if this were built on 32-bit x86 you would probably see the int reinterpretation of the lower 32 bits of the double, since parameters are passed on the stack. I'd guess you are running on 64-bit x86, where parameters are passed in different registers depending on whether they are floating point. In that case, printf reads whatever garbage happens to be in the corresponding integer register at the time, so there's no logic involved in the values you see.
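If you actually want integer output, one option (a sketch, assuming truncation toward zero is acceptable) is an explicit cast, so that a real int is passed in the integer argument slot that %d reads:

    #include <stdio.h>

    #define myconst (419*0.9)   /* expands to a double, roughly 377.1 */

    int main(void)
    {
        /* The cast converts the double to int before the call,
           so the argument matches what %d expects; prints 377. */
        printf("myconst:%d\n", (int)myconst);
        return 0;
    }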
Notice that the compiler can warn about this kind of mismatch between the format string and the values passed. With gcc and clang I suggest you enable the relevant warning (-Wformat, which is included in -Wall, which you should definitely enable anyway), and possibly turn it into an actual error (-Werror=format), since there's no legitimate reason to call printf with a broken format string like this.
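As a sketch of how that looks in practice (assuming gcc; clang accepts the same flags, and the file name is just an example):

    gcc -Wall -Werror=format example.c -o example

With -Wall the format/argument mismatch is reported as a warning; with -Werror=format it stops the build instead.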