In the following simple code, I load a 1-channel data texture. I use glTexImage2D() with GL_LUMINANCE (which is a 1-channel format) and GL_UNSIGNED_BYTE, so it should take 1 byte per pixel. I allocate a buffer whose size equals the number of pixels (2x2) to hold the input pixel data (the values of the pixels don't matter for our purposes).
When I run the following code with Address Sanitizer enabled, it detects a heap buffer overflow in the call to glTexImage2D(), saying that it tried to read beyond the bounds of the heap-allocated buffer:
    #import <OpenGLES/ES2/gl.h>

    //...

    EAGLContext* context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:context];

    GLsizei width = 2, height = 2;
    void *data = malloc(width * height); // contents don't matter
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
This is 100% reproducible and happens on both the iOS Simulator and a device. If I increase the size of the buffer to 6, it does not overflow (2 bytes bigger than the expected size of 4). Sizes of 1x1 and 4x4 don't seem to have the problem, but 2x2 and 3x3 do. It seems kind of arbitrary.

What is wrong?
I have solved it thanks to @genpfault's comment. I needed to set the unpack alignment to 1:
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
Specifically, the unpack alignment determines the alignment of the start of each row of the source data. The default value is 4. Since my rows don't have any special alignment, and there are no gaps between row bytes, the alignment should be 1.
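Put together with the snippet from the question, the fix looks like this (a minimal sketch, assuming the same 2x2 GL_LUMINANCE setup as above; the only change is the glPixelStorei() call before glTexImage2D()):

    #import <OpenGLES/EAGL.h>      // EAGLContext
    #import <OpenGLES/ES2/gl.h>
    #import <stdlib.h>

    EAGLContext* context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:context];

    GLsizei width = 2, height = 2;
    void *data = malloc(width * height);  // 1 byte per pixel, rows tightly packed

    // Rows of the source buffer start on 1-byte boundaries, so tell GL
    // not to expect each row to begin on a 4-byte boundary (the default).
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, data);

    free(data);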
The first row is always aligned, because malloc allocates 16-byte-aligned buffers. But the second and subsequent rows are misaligned with the default alignment of 4 unless the row length is a multiple of 4 (which explains why 2x2 and 3x3 don't work but 4x4 does). 1x1 happens to work because it has no second row.
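As a sanity check, here is a sketch of the arithmetic behind the overflow (the helper name glReadSizeForUnpack is hypothetical, just for illustration, not a GL API): each row except the last is padded out to the unpack alignment, and the last row is read unpadded. With alignment 4 this gives 6 bytes for 2x2 (4 + 2), matching the buffer size that stopped the overflow above, and exactly width * height with alignment 1:

    #include <stdio.h>

    // Bytes glTexImage2D reads for a 1-byte-per-pixel format: each of the
    // first (height - 1) rows is rounded up to `alignment`; the last row
    // only needs `width` bytes. (Hypothetical helper for illustration.)
    static size_t glReadSizeForUnpack(size_t width, size_t height, size_t alignment) {
        size_t paddedRow = (width + alignment - 1) / alignment * alignment;
        return paddedRow * (height - 1) + width;
    }

    int main(void) {
        printf("2x2, align 4: %zu\n", glReadSizeForUnpack(2, 2, 4)); // 6  -> overflows a 4-byte buffer
        printf("3x3, align 4: %zu\n", glReadSizeForUnpack(3, 3, 4)); // 11 -> overflows a 9-byte buffer
        printf("4x4, align 4: %zu\n", glReadSizeForUnpack(4, 4, 4)); // 16 -> fits exactly
        printf("2x2, align 1: %zu\n", glReadSizeForUnpack(2, 2, 1)); // 4  -> fits exactly
        return 0;
    }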