# HG changeset patch
# User mru
# Date 1264130711 0
# Node ID 51571e34b76038bc07fd598005dedfa58c00a498
# Parent  a65cfe0fe4b221844798122da4c84fadb6abe783
Move array specifiers outside DECLARE_ALIGNED() invocations

diff -r a65cfe0fe4b2 -r 51571e34b760 postprocess_altivec_template.c
--- a/postprocess_altivec_template.c	Sat Jan 16 04:49:02 2010 +0000
+++ b/postprocess_altivec_template.c	Fri Jan 22 03:25:11 2010 +0000
@@ -62,7 +62,7 @@
     vector by assuming (stride % 16) == 0, unfortunately
     this is not always true.
     */
-    DECLARE_ALIGNED(16, short, data[8]) =
+    DECLARE_ALIGNED(16, short, data)[8] =
     {
         ((c->nonBQP*c->ppMode.baseDcDiff)>>8) + 1,
         data[0] * 2 + 1,
@@ -222,7 +222,7 @@
     const vector signed int zero = vec_splat_s32(0);
     const int properStride = (stride % 16);
     const int srcAlign = ((unsigned long)src2 % 16);
-    DECLARE_ALIGNED(16, short, qp[8]) = {c->QP};
+    DECLARE_ALIGNED(16, short, qp)[8] = {c->QP};
     vector signed short vqp = vec_ld(0, qp);
     vector signed short vb0, vb1, vb2, vb3, vb4, vb5, vb6, vb7, vb8, vb9;
     vector unsigned char vbA0, av_uninit(vbA1), av_uninit(vbA2), av_uninit(vbA3), av_uninit(vbA4), av_uninit(vbA5), av_uninit(vbA6), av_uninit(vbA7), av_uninit(vbA8), vbA9;
@@ -418,7 +418,7 @@
     */
     uint8_t *src2 = src + stride*3;
     const vector signed int zero = vec_splat_s32(0);
-    DECLARE_ALIGNED(16, short, qp[8]) = {8*c->QP};
+    DECLARE_ALIGNED(16, short, qp)[8] = {8*c->QP};
     vector signed short vqp = vec_splat(
         (vector signed short)vec_ld(0, qp), 0);

@@ -538,7 +538,7 @@
     src & stride :-(
     */
     uint8_t *srcCopy = src;
-    DECLARE_ALIGNED(16, uint8_t, dt[16]);
+    DECLARE_ALIGNED(16, uint8_t, dt)[16];
     const vector signed int zero = vec_splat_s32(0);
     vector unsigned char v_dt;
     dt[0] = deringThreshold;
@@ -602,7 +602,7 @@
         v_avg = vec_avg(v_min, v_max);
     }

-    DECLARE_ALIGNED(16, signed int, S[8]);
+    DECLARE_ALIGNED(16, signed int, S)[8];
    {
    const vector unsigned short mask1 = (vector unsigned short)
                                        {0x0001, 0x0002, 0x0004, 0x0008,
@@ -698,7 +698,7 @@
    /* I'm not sure the following is actually faster
       than straight, unvectorized C code :-(
    */
-    DECLARE_ALIGNED(16, int, tQP2[4]);
+    DECLARE_ALIGNED(16, int, tQP2)[4];
    tQP2[0]= c->QP/2 + 1;
    vector signed int vQP2 = vec_ld(0, tQP2);
    vQP2 = vec_splat(vQP2, 0);
diff -r a65cfe0fe4b2 -r 51571e34b760 postprocess_internal.h
--- a/postprocess_internal.h	Sat Jan 16 04:49:02 2010 +0000
+++ b/postprocess_internal.h	Fri Jan 22 03:25:11 2010 +0000
@@ -143,8 +143,8 @@
     DECLARE_ALIGNED(8, uint64_t, pQPb);
     DECLARE_ALIGNED(8, uint64_t, pQPb2);

-    DECLARE_ALIGNED(8, uint64_t, mmxDcOffset[64]);
-    DECLARE_ALIGNED(8, uint64_t, mmxDcThreshold[64]);
+    DECLARE_ALIGNED(8, uint64_t, mmxDcOffset)[64];
+    DECLARE_ALIGNED(8, uint64_t, mmxDcThreshold)[64];

     QP_STORE_T *stdQPTable;       ///< used to fix MPEG2 style qscale
     QP_STORE_T *nonBQPTable;
diff -r a65cfe0fe4b2 -r 51571e34b760 postprocess_template.c
--- a/postprocess_template.c	Sat Jan 16 04:49:02 2010 +0000
+++ b/postprocess_template.c	Fri Jan 22 03:25:11 2010 +0000
@@ -3514,7 +3514,7 @@
                 horizX1Filter(dstBlock-4, stride, QP);
             else if(mode & H_DEBLOCK){
 #if HAVE_ALTIVEC
-                DECLARE_ALIGNED(16, unsigned char, tempBlock[272]);
+                DECLARE_ALIGNED(16, unsigned char, tempBlock)[272];
                 transpose_16x8_char_toPackedAlign_altivec(tempBlock, dstBlock - (4 + 1), stride);
                 const int t=vertClassify_altivec(tempBlock-48, 16, &c);