document cpuid command line behavior

cpu_info_ is zero in the uninitialized state: all bits are off, so no cpu optimizations are enabled.
The 1 bit indicates cpu_info_ has been initialized, which avoids calling the detection code again, for performance.

MaskCpuFlags() initializes the cpu info, ignoring any existing flags, then masks the detected flags with the supplied value and stores the result to cpu_info_.
As a mask, -1 has no effect: it enables every cpu feature that was detected, but nothing that wasn't.
A mask of 0 clears cpu_info_, so the next call re-initializes the cpu info, which is the same as enabling all detected features.
A mask of 1 turns off all cpu features but keeps the initialized bit set, so the next detection call won't re-initialize and all cpu features remain disabled.
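For illustration, a minimal sketch of driving this from test code through the public cpu_id API (MaskCpuFlags, TestCpuFlag and kCpuHasSSSE3 are real libyuv entry points; the standalone main() wrapper is only an example, not part of this change):

  #include <stdio.h>
  #include "libyuv/cpu_id.h"

  int main() {
    // -1 keeps every feature the detection code found, so SIMD paths stay on.
    libyuv::MaskCpuFlags(-1);
    printf("SSSE3 with mask -1: %d\n", libyuv::TestCpuFlag(libyuv::kCpuHasSSSE3));

    // 1 keeps only the initialized bit: all optimizations are off, and the next
    // TestCpuFlag() call will not re-run detection, so C code is used.
    libyuv::MaskCpuFlags(1);
    printf("SSSE3 with mask 1:  %d\n", libyuv::TestCpuFlag(libyuv::kCpuHasSSSE3));
    return 0;
  }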

So the normal values for command line and programmatic masking are:
1 = C
-1 = SIMD
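For example, on the command line (the desktop binary path here is an assumption; the Android invocation used for this change is shown in the TESTED line below):

  out/Release/libyuv_unittest --libyuv_cpu_info=1    # force C code
  out/Release/libyuv_unittest --libyuv_cpu_info=-1   # allow all detected SIMD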

TBR=harryjin@google.com
BUG=libyuv:600
TESTED=out64/Release/bin/run_libyuv_unittest -s libyuv_unittest --verbose --release --gtest_filter=*ARGBExtractAlpha* -a "--libyuv_width=1280 --libyuv_height=720 --libyuv_repeat=9999 --libyuv_flags=1 --libyuv_cpu_info=1"

Review URL: https://codereview.chromium.org/2042933002 .
Frank Barchard 2016-06-08 10:38:09 -07:00
parent 026be3cd85
commit e2611a7349
5 changed files with 8 additions and 7 deletions

@@ -1,6 +1,6 @@
 Name: libyuv
 URL: http://code.google.com/p/libyuv/
-Version: 1595
+Version: 1596
 License: BSD
 License File: LICENSE

@@ -180,7 +180,7 @@ Running test as benchmark:
 Running test with C code:
-util/android/test_runner.py gtest -s libyuv_unittest -t 7200 --verbose --release --gtest_filter=* -a "--libyuv_width=1280 --libyuv_height=720 --libyuv_repeat=999 --libyuv_flags=0 --libyuv_cpu_info=0"
+util/android/test_runner.py gtest -s libyuv_unittest -t 7200 --verbose --release --gtest_filter=* -a "--libyuv_width=1280 --libyuv_height=720 --libyuv_repeat=999 --libyuv_flags=1 --libyuv_cpu_info=1"
 #### Building with GN

@@ -62,7 +62,7 @@ static __inline int TestCpuFlag(int test_flag) {
 // For testing, allow CPU flags to be disabled.
 // ie MaskCpuFlags(~kCpuHasSSSE3) to disable SSSE3.
 // MaskCpuFlags(-1) to enable all cpu specific optimizations.
-// MaskCpuFlags(0) to disable all cpu specific optimizations.
+// MaskCpuFlags(1) to disable all cpu specific optimizations.
 LIBYUV_API
 void MaskCpuFlags(int enable_flags);

@@ -11,6 +11,6 @@
 #ifndef INCLUDE_LIBYUV_VERSION_H_  // NOLINT
 #define INCLUDE_LIBYUV_VERSION_H_
-#define LIBYUV_VERSION 1595
+#define LIBYUV_VERSION 1596
 #endif  // INCLUDE_LIBYUV_VERSION_H_  NOLINT

@@ -25,9 +25,10 @@ unsigned int fastrand_seed = 0xfb;
 DEFINE_int32(libyuv_width, 0, "width of test image.");
 DEFINE_int32(libyuv_height, 0, "height of test image.");
 DEFINE_int32(libyuv_repeat, 0, "number of times to repeat test.");
-DEFINE_int32(libyuv_flags, 0, "cpu flags for reference code. 0 = C -1 = asm");
-DEFINE_int32(libyuv_cpu_info, -1,
-             "cpu flags for benchmark code. -1 = SIMD, 1 = C");
+DEFINE_int32(libyuv_flags, 0,
+             "cpu flags for reference code. 1 = C, -1 = SIMD");
+DEFINE_int32(libyuv_cpu_info, 0,
+             "cpu flags for benchmark code. 1 = C, -1 = SIMD");
 // For quicker unittests, default is 128 x 72. But when benchmarking,
 // default to 720p. Allow size to specify.