

Redefining Global in C! 🤔🤯

This post explores how redefining a global symbol can sometimes be possible in C.

Redefining a global variable is not allowed in C. For the most part, this statement is right. Compiling the following fails because a is redefined as a float after having been declared as an int.

#include <stdio.h>

int a;
float a = 1.0;

int main() {
  printf("int? - a = %d\n", a);
  printf("float? - a = %f\n", a);

  return 0;
}
main.c

The compiler throws the following error -

❯ gcc main.c -o main
main.c:4:7: error: redefinition of 'a' with a different type: 'float' vs 'int'
float a = 1.0;
      ^
main.c:3:5: note: previous definition is here
int a;
    ^
main.c:8:32: warning: format specifies type 'double' but the argument has type 'int' [-Wformat]
  printf("float? - a = %f\n", a);
                        ~~    ^
                        %d
1 warning and 1 error generated.
compiler output

The compiler complains about the redefinition of a. So a cannot be redefined? 😏

Try this instead!

Move the declaration of a as an int into another .c file, and keep main.c as below -

int a;
external.c
#include <stdio.h>

float a = 1.0;

int main() {
  printf("int? - a = %d\n", a);
  printf("float? - a = %f\n", a);

  return 0;
}
main.c

Compile both files.

❯ gcc external.c main.c -o main
main.c:6:30: warning: format specifies type 'int' but the argument has type 'float' [-Wformat]
  printf("int? - a = %d\n", a);
                      ~~     ^
                      %f
1 warning generated.

A warning? No error? Yes. The compiler generated the binary. 😱

And it works! 🤯

❯ ./main
int? - a = 0
float? - a = 1.000000
output
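So what merged the two definitions? The tentative definition `int a;` in external.c is emitted as a *common symbol* by compilers that follow the old Unix behaviour, and at link time a common symbol is folded into a strong definition of the same name from another object file. Both names then refer to the same four bytes, which hold the bit pattern of 1.0f. GCC before version 10 (and Clang before 11) did this by default; newer compilers default to -fno-common and reject the program at link time with a multiple-definition error. A sketch of both behaviours, assuming gcc is installed:

```shell
# Recreate the two translation units from the post.
cat > external.c <<'EOF'
int a;   /* tentative definition -> common symbol under -fcommon */
EOF
cat > main.c <<'EOF'
float a = 1.0; /* strong definition */
int main(void) { return 0; }
EOF

# Old default: the linker merges the common symbol into the strong one.
gcc -fcommon external.c main.c -o merged && echo "linked with -fcommon"

# New default (GCC 10+ / Clang 11+): two strong definitions collide.
gcc -fno-common external.c main.c -o strict 2>/dev/null \
  || echo "multiple definition error with -fno-common"
```

So whether the trick "works" depends on your toolchain version: with -fcommon the binary links and runs exactly as shown above, while with -fno-common the linker stops you.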