[Solved - 2 Solutions] Why does the C preprocessor interpret the word “linux” as the constant “1”?



Linux - Problem:

Why does the C preprocessor interpret the word “linux” as the constant “1” ?

test.c:

#include <stdio.h>
int main(void)
{
    int linux = 5;
    return 0;
}

Result of $ gcc -E test.c:

....
int main(void)
{
    int 1 = 5;
    return 0;
}

Linux - Solution 1:

  • Predefining symbols such as unix and vax was a way to let code identify at compile time which system it was being compiled for. There was no official language standard in those days, and C code of any complexity was typically a maze of #ifdefs to allow for differences between systems (see the sketch after the command examples below).
  • These macro definitions were generally set by the compiler itself, not defined in a library header file.
  • Since there were no real rules about which identifiers could be used by the implementation and which were reserved for programmers, compiler writers felt free to use simple names like unix and assumed that programmers would simply avoid using those names for their own purposes.
  • The ANSI C standard introduced rules restricting which symbols an implementation may legally predefine.
  • A macro predefined by the compiler may only have a name starting with two underscores, or with an underscore followed by an uppercase letter, leaving programmers free to use identifiers that don't match that pattern and aren't used in the standard library.
  • As a result, any compiler that predefines unix or linux is non-conforming, since it will fail to compile perfectly legal code such as int linux = 5;.
  • As it happens, gcc is non-conforming by default -- but it can be made to conform with the right command-line options:
gcc -std=c90 -pedantic ... # or -std=c89 or -ansi
gcc -std=c99 -pedantic
gcc -std=c11 -pedantic
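
For historical context, here is a minimal sketch (my illustration, not from the original answer) of the pre-ANSI style of portability code that these predefined symbols made possible; the branch and the messages are hypothetical placeholders:

#include <stdio.h>

/* Old-style portability code: the compiler itself predefines a plain
 * name such as "unix" or "vax", and the program branches on it.
 * Conforming modern code would test __unix__ or __linux__ instead. */
void print_platform(void)
{
#ifdef unix
    printf("compiled for a Unix-like system\n");
#else
    printf("compiled for some other system\n");
#endif
}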

If your program needs to know whether it's being compiled for a Linux target, it can check whether __linux__ is defined (assuming you're using gcc or a compiler that's compatible with it).
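
For instance, a minimal sketch of such a check (the message strings here are just placeholders):

#include <stdio.h>

int main(void)
{
#if defined(__linux__)
    /* __linux__ lives in the implementation-reserved namespace
     * (two leading underscores), so even a conforming compiler
     * may predefine it. */
    printf("building for a Linux target\n");
#else
    printf("building for a non-Linux target\n");
#endif
    return 0;
}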

As an aside, the predefined unix macro is what makes obfuscated classics like this one work:

main() { printf(&unix["\021%six\012\0"],(unix)["have"]+"fun"-0x60);}

It prints "unix", but for reasons that have absolutely nothing to do with the spelling of the macro name.

Linux - Solution 2:

Use this command to dump all of the macros that gcc predefines:

gcc -dM -E - < /dev/null

You will get output like this (here from gcc 6.3.0 on x86-64 Linux; the exact macros and values vary with the compiler version and target):

#define _LP64 1
#define _STDC_PREDEF_H 1
#define __ATOMIC_ACQUIRE 2
#define __ATOMIC_ACQ_REL 4
#define __ATOMIC_CONSUME 1
#define __ATOMIC_HLE_ACQUIRE 65536
#define __ATOMIC_HLE_RELEASE 131072
#define __ATOMIC_RELAXED 0
#define __ATOMIC_RELEASE 3
#define __ATOMIC_SEQ_CST 5
#define __BIGGEST_ALIGNMENT__ 16
#define __BYTE_ORDER__ __ORDER_LITTLE_ENDIAN__
#define __CHAR16_TYPE__ short unsigned int
#define __CHAR32_TYPE__ unsigned int
#define __CHAR_BIT__ 8
#define __DBL_DECIMAL_DIG__ 17
#define __DBL_DENORM_MIN__ ((double)4.94065645841246544177e-324L)
#define __DBL_DIG__ 15
#define __DBL_EPSILON__ ((double)2.22044604925031308085e-16L)
#define __DBL_HAS_DENORM__ 1
#define __DBL_HAS_INFINITY__ 1
#define __DBL_HAS_QUIET_NAN__ 1
#define __DBL_MANT_DIG__ 53
#define __DBL_MAX_10_EXP__ 308
#define __DBL_MAX_EXP__ 1024
#define __DBL_MAX__ ((double)1.79769313486231570815e+308L)
#define __DBL_MIN_10_EXP__ (-307)
#define __DBL_MIN_EXP__ (-1021)
#define __DBL_MIN__ ((double)2.22507385850720138309e-308L)
#define __DEC128_EPSILON__ 1E-33DL
#define __DEC128_MANT_DIG__ 34
#define __DEC128_MAX_EXP__ 6145
#define __DEC128_MAX__ 9.999999999999999999999999999999999E6144DL
#define __DEC128_MIN_EXP__ (-6142)
#define __DEC128_MIN__ 1E-6143DL
#define __DEC128_SUBNORMAL_MIN__ 0.000000000000000000000000000000001E-6143DL
#define __DEC32_EPSILON__ 1E-6DF
#define __DEC32_MANT_DIG__ 7
#define __DEC32_MAX_EXP__ 97
#define __DEC32_MAX__ 9.999999E96DF
#define __DEC32_MIN_EXP__ (-94)
#define __DEC32_MIN__ 1E-95DF
#define __DEC32_SUBNORMAL_MIN__ 0.000001E-95DF
#define __DEC64_EPSILON__ 1E-15DD
#define __DEC64_MANT_DIG__ 16
#define __DEC64_MAX_EXP__ 385
#define __DEC64_MAX__ 9.999999999999999E384DD
#define __DEC64_MIN_EXP__ (-382)
#define __DEC64_MIN__ 1E-383DD
#define __DEC64_SUBNORMAL_MIN__ 0.000000000000001E-383DD
#define __DECIMAL_BID_FORMAT__ 1
#define __DECIMAL_DIG__ 21
#define __DEC_EVAL_METHOD__ 2
#define __ELF__ 1
#define __FINITE_MATH_ONLY__ 0
#define __FLOAT_WORD_ORDER__ __ORDER_LITTLE_ENDIAN__
#define __FLT_DECIMAL_DIG__ 9
#define __FLT_DENORM_MIN__ 1.40129846432481707092e-45F
#define __FLT_DIG__ 6
#define __FLT_EPSILON__ 1.19209289550781250000e-7F
#define __FLT_EVAL_METHOD__ 0
#define __FLT_HAS_DENORM__ 1
#define __FLT_HAS_INFINITY__ 1
#define __FLT_HAS_QUIET_NAN__ 1
#define __FLT_MANT_DIG__ 24
#define __FLT_MAX_10_EXP__ 38
#define __FLT_MAX_EXP__ 128
#define __FLT_MAX__ 3.40282346638528859812e+38F
#define __FLT_MIN_10_EXP__ (-37)
#define __FLT_MIN_EXP__ (-125)
#define __FLT_MIN__ 1.17549435082228750797e-38F
#define __FLT_RADIX__ 2
#define __FXSR__ 1
#define __GCC_ASM_FLAG_OUTPUTS__ 1
#define __GCC_ATOMIC_BOOL_LOCK_FREE 2
#define __GCC_ATOMIC_CHAR16_T_LOCK_FREE 2
#define __GCC_ATOMIC_CHAR32_T_LOCK_FREE 2
#define __GCC_ATOMIC_CHAR_LOCK_FREE 2
#define __GCC_ATOMIC_INT_LOCK_FREE 2
#define __GCC_ATOMIC_LLONG_LOCK_FREE 2
#define __GCC_ATOMIC_LONG_LOCK_FREE 2
#define __GCC_ATOMIC_POINTER_LOCK_FREE 2
#define __GCC_ATOMIC_SHORT_LOCK_FREE 2
#define __GCC_ATOMIC_TEST_AND_SET_TRUEVAL 1
#define __GCC_ATOMIC_WCHAR_T_LOCK_FREE 2
#define __GCC_HAVE_DWARF2_CFI_ASM 1
#define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_1 1
#define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_2 1
#define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 1
#define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 1
#define __GCC_IEC_559 2
#define __GCC_IEC_559_COMPLEX 2
#define __GNUC_MINOR__ 3
#define __GNUC_PATCHLEVEL__ 0
#define __GNUC_STDC_INLINE__ 1
#define __GNUC__ 6
#define __GXX_ABI_VERSION 1010
#define __INT16_C(c) c
#define __INT16_MAX__ 0x7fff
#define __INT16_TYPE__ short int
#define __INT32_C(c) c
#define __INT32_MAX__ 0x7fffffff
#define __INT32_TYPE__ int
#define __INT64_C(c) c ## L
#define __INT64_MAX__ 0x7fffffffffffffffL
#define __INT64_TYPE__ long int
#define __INT8_C(c) c
#define __INT8_MAX__ 0x7f
#define __INT8_TYPE__ signed char
#define __INTMAX_C(c) c ## L
#define __INTMAX_MAX__ 0x7fffffffffffffffL
#define __INTMAX_TYPE__ long int
#define __INTPTR_MAX__ 0x7fffffffffffffffL
#define __INTPTR_TYPE__ long int
#define __INT_FAST16_MAX__ 0x7fffffffffffffffL
#define __INT_FAST16_TYPE__ long int
#define __INT_FAST32_MAX__ 0x7fffffffffffffffL
#define __INT_FAST32_TYPE__ long int
#define __INT_FAST64_MAX__ 0x7fffffffffffffffL
#define __INT_FAST64_TYPE__ long int
#define __INT_FAST8_MAX__ 0x7f
#define __INT_FAST8_TYPE__ signed char
#define __INT_LEAST16_MAX__ 0x7fff
#define __INT_LEAST16_TYPE__ short int
#define __INT_LEAST32_MAX__ 0x7fffffff
#define __INT_LEAST32_TYPE__ int
#define __INT_LEAST64_MAX__ 0x7fffffffffffffffL
#define __INT_LEAST64_TYPE__ long int
#define __INT_LEAST8_MAX__ 0x7f
#define __INT_LEAST8_TYPE__ signed char
#define __INT_MAX__ 0x7fffffff
#define __LDBL_DENORM_MIN__ 3.64519953188247460253e-4951L
#define __LDBL_DIG__ 18
#define __LDBL_EPSILON__ 1.08420217248550443401e-19L
#define __LDBL_HAS_DENORM__ 1
#define __LDBL_HAS_INFINITY__ 1
#define __LDBL_HAS_QUIET_NAN__ 1
#define __LDBL_MANT_DIG__ 64
#define __LDBL_MAX_10_EXP__ 4932
#define __LDBL_MAX_EXP__ 16384
#define __LDBL_MAX__ 1.18973149535723176502e+4932L
#define __LDBL_MIN_10_EXP__ (-4931)
#define __LDBL_MIN_EXP__ (-16381)
#define __LDBL_MIN__ 3.36210314311209350626e-4932L
#define __LONG_LONG_MAX__ 0x7fffffffffffffffLL
#define __LONG_MAX__ 0x7fffffffffffffffL
#define __LP64__ 1
#define __MMX__ 1
#define __NO_INLINE__ 1
#define __ORDER_BIG_ENDIAN__ 4321
#define __ORDER_LITTLE_ENDIAN__ 1234
#define __ORDER_PDP_ENDIAN__ 3412
#define __PIC__ 2
#define __PIE__ 2
#define __PRAGMA_REDEFINE_EXTNAME 1
#define __PTRDIFF_MAX__ 0x7fffffffffffffffL
#define __PTRDIFF_TYPE__ long int
#define __REGISTER_PREFIX__ 
#define __SCHAR_MAX__ 0x7f
#define __SEG_FS 1
#define __SEG_GS 1
#define __SHRT_MAX__ 0x7fff
#define __SIG_ATOMIC_MAX__ 0x7fffffff
#define __SIG_ATOMIC_MIN__ (-__SIG_ATOMIC_MAX__ - 1)
#define __SIG_ATOMIC_TYPE__ int
#define __SIZEOF_DOUBLE__ 8
#define __SIZEOF_FLOAT128__ 16
#define __SIZEOF_FLOAT80__ 16
#define __SIZEOF_FLOAT__ 4
#define __SIZEOF_INT128__ 16
#define __SIZEOF_INT__ 4
#define __SIZEOF_LONG_DOUBLE__ 16
#define __SIZEOF_LONG_LONG__ 8
#define __SIZEOF_LONG__ 8
#define __SIZEOF_POINTER__ 8
#define __SIZEOF_PTRDIFF_T__ 8
#define __SIZEOF_SHORT__ 2
#define __SIZEOF_SIZE_T__ 8
#define __SIZEOF_WCHAR_T__ 4
#define __SIZEOF_WINT_T__ 4
#define __SIZE_MAX__ 0xffffffffffffffffUL
#define __SIZE_TYPE__ long unsigned int
#define __SSE2_MATH__ 1
#define __SSE2__ 1
#define __SSE_MATH__ 1
#define __SSE__ 1
#define __SSP_STRONG__ 3
#define __STDC_HOSTED__ 1
#define __STDC_IEC_559_COMPLEX__ 1
#define __STDC_IEC_559__ 1
#define __STDC_ISO_10646__ 201605L
#define __STDC_NO_THREADS__ 1
#define __STDC_UTF_16__ 1
#define __STDC_UTF_32__ 1
#define __STDC_VERSION__ 201112L
#define __STDC__ 1
#define __UINT16_C(c) c
#define __UINT16_MAX__ 0xffff
#define __UINT16_TYPE__ short unsigned int
#define __UINT32_C(c) c ## U
#define __UINT32_MAX__ 0xffffffffU
#define __UINT32_TYPE__ unsigned int
#define __UINT64_C(c) c ## UL
#define __UINT64_MAX__ 0xffffffffffffffffUL
#define __UINT64_TYPE__ long unsigned int
#define __UINT8_C(c) c
#define __UINT8_MAX__ 0xff
#define __UINT8_TYPE__ unsigned char
#define __UINTMAX_C(c) c ## UL
#define __UINTMAX_MAX__ 0xffffffffffffffffUL
#define __UINTMAX_TYPE__ long unsigned int
#define __UINTPTR_MAX__ 0xffffffffffffffffUL
#define __UINTPTR_TYPE__ long unsigned int
#define __UINT_FAST16_MAX__ 0xffffffffffffffffUL
#define __UINT_FAST16_TYPE__ long unsigned int
#define __UINT_FAST32_MAX__ 0xffffffffffffffffUL
#define __UINT_FAST32_TYPE__ long unsigned int
#define __UINT_FAST64_MAX__ 0xffffffffffffffffUL
#define __UINT_FAST64_TYPE__ long unsigned int
#define __UINT_FAST8_MAX__ 0xff
#define __UINT_FAST8_TYPE__ unsigned char
#define __UINT_LEAST16_MAX__ 0xffff
#define __UINT_LEAST16_TYPE__ short unsigned int
#define __UINT_LEAST32_MAX__ 0xffffffffU
#define __UINT_LEAST32_TYPE__ unsigned int
#define __UINT_LEAST64_MAX__ 0xffffffffffffffffUL
#define __UINT_LEAST64_TYPE__ long unsigned int
#define __UINT_LEAST8_MAX__ 0xff
#define __UINT_LEAST8_TYPE__ unsigned char
#define __USER_LABEL_PREFIX__ 
#define __VERSION__ "6.3.0 20170406"
#define __WCHAR_MAX__ 0x7fffffff
#define __WCHAR_MIN__ (-__WCHAR_MAX__ - 1)
#define __WCHAR_TYPE__ int
#define __WINT_MAX__ 0xffffffffU
#define __WINT_MIN__ 0U
#define __WINT_TYPE__ unsigned int
#define __amd64 1
#define __amd64__ 1
#define __code_model_small__ 1
#define __gnu_linux__ 1
#define __has_include(STR) __has_include__(STR)
#define __has_include_next(STR) __has_include_next__(STR)
#define __k8 1
#define __k8__ 1
#define __linux 1
#define __linux__ 1
#define __pic__ 2
#define __pie__ 2
#define __unix 1
#define __unix__ 1
#define __x86_64 1
#define __x86_64__ 1
#define linux 1
#define unix 1
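
To pick out just the Linux-related macros, you can filter the dump through grep:

gcc -dM -E - < /dev/null | grep linux

On the machine above this prints the four lines already visible in the listing:

#define __gnu_linux__ 1
#define __linux 1
#define __linux__ 1
#define linux 1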
