"OpenVINO is a toolkit that boosts deep AI learning to interact with the real world. Now it's more effective!" This incredible news is a call to action for us. The project code has been checked, errors have been detected, and the first part of the article is ready to be read. It's showtime!
On March 6, 2024, Intel released OpenVINO 2024.0, an open-source solution packed with new enhancements! You can read more about the release here. So, what is OpenVINO? First of all, it's a tool that helps developers train computer vision AI effectively.
Computer vision is a technology that identifies and interprets data in visual images taken from video of sufficient quality. That's a hard task for human eyes, but a camera paired with a trained AI won't let a single white rabbit escape.
It's also worth noting that computer vision is really handy and helps in many fields, for example:
That's not the whole list, but cataloging use cases isn't the point of this article. Either way, there's no arguing that AI-powered technologies are highly beneficial. Various deep learning tools are being developed to make all of this possible. One such tool is OpenVINO.
To be more precise, OpenVINO is a free, ready-to-use toolkit that developers can use to speed up the development of such solutions. Alongside the AI training tools, it also provides tools for evaluating AI effectiveness and even its own benchmark. That's about it.
Timidly reaching for this work of art with my crooked fingers, I pulled the PVS-Studio static analyzer out of its holster to dive into the project code and hunt for alien entities, a.k.a. bugs. The result will surprise you.
Even with the bugs, the tool still works as intended: the AI learns and operates well (judging by the endless stream of dev articles).
You won't encounter such a case in training your AI:
Dev: AI, I can't feel my computer vision!
AI: Meow?!
The commit I built the project on for checking it: 2d8ac08.
This article isn't meant to belittle the work of the OpenVINO developers. Its aim is to popularize the use of static code analyzers, which can be of great help even for high-quality and successful projects.
The checked project is full of typos and issues that stem from copy-paste.
That's why I decided to write a separate article to highlight that such errors can be found in almost any project.
Indeed, the programmer might have intended it that way. However, the chance of that is... very low.
Fragment N1
ov::pass::ConvertPadToGroupConvolution::
ConvertPadToGroupConvolution()
{
  ....
  const auto& pad_begin = pad->get_pads_begin();
  const auto& pad_end = pad->get_pads_end();
  if (pad_begin.empty() || pad_end.empty())
  {
    // pads will be empty if inputs are not constants
    return false;
  }
  // check that Pad has non-negative values
  auto pred = [](int64_t a)
  {
    return a < 0;
  };
  if (std::any_of(pad_begin.begin(), pad_begin.end(), pred) ||
      std::any_of(pad_begin.begin(), pad_begin.end(), pred))   // <=
  {
    return false;
  }
  ....
}
The analyzer warning:
V501 There are identical sub-expressions 'std::any_of(pad_begin.begin(), pad_begin.end(), pred)' to the left and to the right of the '||' operator. convert_pad_to_group_conv.cpp 66
As you can see, the condition contains two identical subexpressions: the same call to the std::any_of function with the same arguments on both sides of the '||' operator. I included the code above for a reason. If you take a look at it, you'll see that the condition should probably have been written as follows:
if (std::any_of(pad_begin.begin(), pad_begin.end(), pred) ||
    std::any_of(pad_end.begin(), pad_end.end(), pred))
{
  return false;
}
Fragment N2
ov::pass::ShuffleChannelsFusion::
ShuffleChannelsFusion(const bool reshape_constants_check)
{
  ....
  auto reshape_before_constant =
    std::dynamic_pointer_cast<ov::op::v0::Constant>(
      pattern_map.at(reshape_before_const_pattern).get_node_shared_ptr());
  auto reshape_before = std::dynamic_pointer_cast<ov::op::v1::Reshape>(
    pattern_map.at(reshape_before_pattern).get_node_shared_ptr());
  auto transpose = std::dynamic_pointer_cast<ov::op::v1::Transpose>(
    pattern_map.at(transpose_pattern).get_node_shared_ptr());
  auto reshape_after = std::dynamic_pointer_cast<ov::op::v1::Reshape>(
    pattern_map.at(reshape_after_pattern).get_node_shared_ptr());
  auto reshape_after_constant =
    std::dynamic_pointer_cast<ov::op::v0::Constant>(
      pattern_map.at(reshape_after_const_pattern).get_node_shared_ptr());
  if (!reshape_after || !transpose || !reshape_after ||   // <=
      !reshape_before_constant || !reshape_after_constant)
  {
    return false;
  }
  ....
}
The analyzer warning:
V501 There are identical sub-expressions '!reshape_after' to the left and to the right of the '||' operator. shuffle_channels_fusion.cpp 115
Here we go again: another error in a condition. The same thing is checked twice: whether the reshape_after pointer is non-null. If we look at the code above once more, we notice that reshape_before is initialized but never checked. I think the condition should be rewritten as follows:
if (!reshape_after || !transpose || !reshape_before ||
    !reshape_before_constant || !reshape_after_constant)
{
  return false;
}
Fragment N3
....
using PatternValueMaps = std::vector<PatternValueMap>;
....
PatternValueMaps m_pattern_value_maps;
....
MatcherState::~MatcherState()
{
  if (m_restore)
  {
    if (!m_matcher->m_matched_list.empty())
    {
      m_matcher->m_matched_list.erase(
        m_matcher->m_matched_list.begin() + m_watermark,
        m_matcher->m_matched_list.end());
    }
    if (!m_pattern_value_maps.empty())
    {
      m_matcher->m_pattern_value_maps.erase(                  // <=
        m_pattern_value_maps.begin() + m_capture_size,
        m_pattern_value_maps.end());
    }
    m_matcher->m_pattern_map = m_pattern_value_map;
  }
}
The analyzer warning:
V539 [CERT-CTR53-CPP] Consider inspecting iterators which are being passed as arguments to function 'erase'. matcher.cpp 48
We face a tricky error here. Let's break it down.
First, we see that the code checks whether the m_pattern_value_maps vector is empty.
Then we notice that in the then branch of the second nested if, a different container, m_matcher->m_pattern_value_maps, is handled for some reason. From bad to worse.
The erase member function is called on the m_matcher->m_pattern_value_maps container, but it's passed iterators of another container, m_pattern_value_maps. This won't work correctly.
Judging by the constructor and destructor, the MatcherState class is designed to revert changes made to an object of the Matcher type. The RAII wrapper probably used to save the state of the Matcher object in the MatcherState::m_pattern_value_map and MatcherState::m_pattern_value_maps data members and then restore it in the destructor.
However, the code was later rewritten: the MatcherState::m_watermark and MatcherState::m_capture_size data members were added, and the destructor now has to delete the elements that were appended to the end of the Matcher::m_matched_list and Matcher::m_pattern_value_maps containers.
So, here's how to fix the code:
if (!m_matcher->m_pattern_value_maps.empty())
{
  m_matcher->m_pattern_value_maps.erase(
    m_matcher->m_pattern_value_maps.begin() + m_capture_size,
    m_matcher->m_pattern_value_maps.end());
}
I'd also like to point out that the MatcherState::m_pattern_value_maps data member is no longer used, so we may delete it.
Fragment N4
template <x64::cpu_isa_t isa>
void jit_power_dynamic_emitter::
emit_isa(const std::vector<size_t> &in_vec_idxs,
         const std::vector<size_t> &out_vec_idxs) const
{
  ....
  if (isa == x64::avx512_core || isa == x64::avx512_core)   // <=
  {
    h->sub(h->rsp, n_k_regs_to_save * k_mask_size);
    for (size_t i = 0; i < n_k_regs_to_save; ++i)
    {
      if (x64::mayiuse(x64::avx512_core))
        h->kmovq(h->ptr[h->rsp + i * k_mask_size], Opmask(i));
      else
        h->kmovw(h->ptr[h->rsp + i * k_mask_size], Opmask(i));
    }
  }
  ....
}
The analyzer warning:
V501 There are identical sub-expressions 'isa == x64::avx512_core' to the left and to the right of the '||' operator. jit_eltwise_emitters.cpp 705
Here's the same old condition error once again: two absolutely identical expressions are checked. It seems like isa should have been compared with a different value in the second subexpression.
Funnily enough, the same error occurs three more times in the code.
Fragment N5
void GridSampleKernel<isa>::reflectionPadding(const Vmm& vCoordDst,
                                              const Vmm& vCoordOrigin,
                                              const coord dim)
{
  ....
  if (dim == coord::w)
  {
    ....
  }
  else if (coord::h)   // <=
  {
    ....
  }
  else { .... }
  ....
}
The analyzer warning:
V768 The enumeration constant 'h' is used as a variable of a Boolean-type. grid_sample.cpp 925
It's suspicious that the second condition checks the coord::h constant by itself. Since the constant has the value 1, the condition is always true, and that's clearly an error.
As a result, the code in the body of the last else branch is never executed. Is this a tricky developers' idea, or an artificial limitation on the executable code? It looks more like a bug: the condition should contain the dim == coord::h expression.
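If that guess is right, the fix is a one-token change. Here's a sketch of the corrected condition, with the branch bodies omitted just like in the fragment above:

if (dim == coord::w)
{
  ....
}
else if (dim == coord::h)   // compare the parameter with the constant
{
  ....
}
else { .... }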
Here are a few other similar warnings:
The error most likely occurred because the programmer copied the coord::h enumerator into the condition but forgot to compare it with the value of the dim parameter.
Fragment N6
OutputVector translate_im2col(const NodeContext& context)
{
  num_inputs_check(context, 5, 5);
  auto input = context.get_input(0);
  auto kernel_size = context.const_input<std::vector<int64_t>>(1);
  PYTORCH_OP_CONVERSION_CHECK(kernel_size.size() == 2,
                              "kernel size should contains 2 elements");
  auto dilation = context.const_input<std::vector<int64_t>>(2);
  PYTORCH_OP_CONVERSION_CHECK(kernel_size.size() == 2,       // <=
                              "dilation should contains 2 elements");
  auto padding = context.const_input<std::vector<int64_t>>(3);
  PYTORCH_OP_CONVERSION_CHECK(kernel_size.size() == 2,       // <=
                              "padding should contains 2 elements");
  auto stride = context.const_input<std::vector<int64_t>>(4);
  PYTORCH_OP_CONVERSION_CHECK(kernel_size.size() == 2,       // <=
                              "stride should contains 2 elements");
  ....
}
The analyzer warning:
V547 Expression 'kernel_size.size() == 2' is always true. im2col.cpp 65
Here are a few more warnings to see the full picture:
At first glance, it's not clear what the analyzer is trying to tell us with all these warnings, so let's speculate.
The first thing that catches the eye is that kernel_size.size() == 2 is checked four times. The kernel_size vector doesn't change anywhere after the first check, which is why the analyzer considers the following three checks always true.
How did the analyzer figure that out?
The PYTORCH_OP_CONVERSION_CHECK macro expands into code that creates and throws an exception if the expression passed to the macro evaluates to false. Hence, for any code after the check to be reachable, kernel_size.size() must equal 2, and the analyzer treats it as such from then on.
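To give a feel for why the analyzer can rely on that fact, here's a rough, purely illustrative sketch of such a check macro. The real PYTORCH_OP_CONVERSION_CHECK in OpenVINO is implemented differently, but the effect is the same: the code after the check only runs when the condition holds.

#include <stdexcept>

// Illustrative sketch only; not the actual OpenVINO macro.
#define CONVERSION_CHECK(cond, msg)     \
  do {                                  \
    if (!(cond))                        \
      throw std::runtime_error(msg);    \
  } while (false)

....
CONVERSION_CHECK(kernel_size.size() == 2,
                 "kernel size should contains 2 elements");
// If execution reaches this point, kernel_size.size() == 2 is guaranteed,
// which is exactly what the analyzer tracks.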
Next, you might ask, "Why even check the kernel_size.size() value if the kernel_size vector doesn't change and its size is always 2?" Everything we've looked at so far is the consequence of the mistake, not its cause. The cause is simple.
The kernel_size object was created and initialized, and the kernel_size.size() == 2 expression was passed to the PYTORCH_OP_CONVERSION_CHECK macro and checked there. Then the dilation object was created, but kernel_size.size() == 2 was passed to the macro again, although, logically, dilation.size() == 2 should have been passed and checked.
If you look at the second argument, the message string passed to the macro, it again becomes obvious that the size function should have been called on the dilation object. The same goes for the other two objects, you get the idea... The programmer copied the lines and forgot to change the names of the objects whose size checks are passed to the macro.
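Assuming that's what was intended, the fixed checks would look something like this:

auto dilation = context.const_input<std::vector<int64_t>>(2);
PYTORCH_OP_CONVERSION_CHECK(dilation.size() == 2,
                            "dilation should contains 2 elements");
auto padding = context.const_input<std::vector<int64_t>>(3);
PYTORCH_OP_CONVERSION_CHECK(padding.size() == 2,
                            "padding should contains 2 elements");
auto stride = context.const_input<std::vector<int64_t>>(4);
PYTORCH_OP_CONVERSION_CHECK(stride.size() == 2,
                            "stride should contains 2 elements");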
Fragment N7
void BinaryConvolution::createPrimitive()
{
  ....
  bool args_ok =
    jcp.l_pad <= jcp.ur_w &&                                    // <=
    (r_pad_no_tail <= jcp.ur_w) && (jcp.l_pad <= jcp.ur_w) &&   // <=
    IMPLICATION(jcp.kw > 7, (jcp.t_pad == 0 && jcp.l_pad == 0) ||
                            (jcp.stride_w == 1 && jcp.stride_h == 1));
  ....
}
The analyzer warning:
V501 There are identical sub-expressions 'jcp.l_pad <= jcp.ur_w' to the left and to the right of the '&&' operator. bin_conv.cpp 1088
As we can see, the same expression is checked twice. Nothing bad will happen because of it, but the redundant expression is better deleted.
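Assuming the second occurrence really is just a duplicate rather than a check of some other field, the cleaned-up condition could look like this:

bool args_ok =
  jcp.l_pad <= jcp.ur_w &&
  (r_pad_no_tail <= jcp.ur_w) &&
  IMPLICATION(jcp.kw > 7, (jcp.t_pad == 0 && jcp.l_pad == 0) ||
                          (jcp.stride_w == 1 && jcp.stride_h == 1));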
Fragment N8
void FakeQuantize::getSupportedDescriptors()
{
  ....
  if (getInputShapeAtPort(0).getRank() !=
      getInputShapeAtPort(0).getRank())   // <=
  {
    OPENVINO_THROW(errorPrefix,
                   "has different ranks for input and output tensors");
  }
  ....
}
The analyzer warning:
V501 There are identical sub-expressions 'getInputShapeAtPort(0).getRank()' to the left and to the right of the '!=' operator. fake_quantize.cpp 1301
The condition checks the same subexpression for inequality, so there might be a copy-paste issue here. The message explicitly says "has different ranks for input and output tensors," and there is a similar function for the output values, as the following code shows:
const Shape& getOutputShapeAtPort(size_t port) const
{
  if (outputShapes.size() <= port)
  {
    OPENVINO_THROW("Incorrect output port number for node ", getName());
  }
  return outputShapes[port];
}
We may fix the code as follows:
if (getInputShapeAtPort(0).getRank() != getOutputShapeAtPort(0).getRank())
{
  OPENVINO_THROW(errorPrefix,
                 "has different ranks for input and output tensors");
}
Fragment N9
void set_state(const ov::SoPtr<ov::ITensor>& state) override
{
  OPENVINO_ASSERT(state->get_shape() == m_state->get_shape(),
                  "Wrong tensor shape.");
  OPENVINO_ASSERT(state->get_element_type() == state->get_element_type(),   // <=
                  "Wrong tensor type.");
  OPENVINO_ASSERT(state->get_byte_size() == state->get_byte_size(),         // <=
                  "Blob size of tensors are not equal.");
  std::memcpy(m_state->data(), state->data(), state->get_byte_size());
}
The analyzer warnings:
The first assertion compares the state->get_shape() and m_state->get_shape() expressions. However, because of copy-paste, the next two lines compare the results of calling the get_element_type and get_byte_size member functions on the same state object. Most likely, it happened because the m_state and state names are similar, and the programmer didn't notice.
Let's fix the code:
....
OPENVINO_ASSERT(state->get_element_type() == m_state->get_element_type(),
                "Wrong tensor type.");
OPENVINO_ASSERT(state->get_byte_size() == m_state->get_byte_size(),
                "Blob size of tensors are not equal.");
....
Fragment N10
void SubgraphExtractor::add_new_inputs(
  const std::vector<InputEdge>& new_inputs,
  const bool merge_inputs)
{
  ....
  auto it = std::find_if(new_inputs.begin(), new_inputs.begin(),
    [&](const InputEdge& input_edge)
    {
      return get_input_tensor_name(m_onnx_graph, input_edge) == input.first;
    });
  ....
}
The analyzer warning:
V539 [CERT-CTR53-CPP] Consider inspecting iterators which are being passed as arguments to function 'find_if'. subgraph_extraction.cpp 300
Looking at the std::find_if call, we notice that the second argument should be a call to new_inputs.end(). As the code stands now, std::find_if always returns new_inputs.begin().
Let's fix the code:
....
auto it = std::find_if(new_inputs.begin(), new_inputs.end(),
  [&](const InputEdge& input_edge)
  {
    return get_input_tensor_name(m_onnx_graph, input_edge) == input.first;
  });
....
Fragment N11
std::vector<PortConfig> inConfs;
....
MemoryDescPtr Node::getBaseMemDescAtInputPort(size_t portNum) const
{
  if (auto primDesc = getSelectedPrimitiveDescriptor())
  {
    const auto& inConfs = primDesc->getConfig().inConfs;
    if (inConfs.size() < portNum)                            // N1
    {
      OPENVINO_THROW("Can't get input memory desc at port: ",
                     portNum, ", incorrect port number");
    }
    return inConfs[portNum].getMemDesc();                    // N2
  }
  OPENVINO_THROW("Can't get input memory desc, "
                 "primitive descriptor is not selected");
}
The analyzer warning:
V557 [CERT-ARR30-C] Array overrun is possible. The 'portNum' index is pointing beyond array bound. node.cpp 402
It looks like the code is fine and the analyzer warns for nothing. But no such luck. To make the explanation easier, I've marked the lines we should focus on.
Line N1 checks the inConfs.size() < portNum expression. The condition is false when portNum <= inConfs.size(), and the inConfs container is then accessed on line N2. A container should be accessed with indices in the range [0 ... N - 1]. However, in the boundary case, when portNum == inConfs.size(), we get a container overrun, which leads to undefined behavior.
The fixed check should look like the following:
if (portNum >= inConfs.size()) { .... }
I also switched the operands around because, in my opinion, such a check is easier to read.
At this point, the reader may reasonably ask: "There are no typos or copy-paste problems in this example, so why is it here?" The point is that this error has been duplicated:
....
std::vector<PortConfig> outConfs;
....
MemoryDescPtr Node::getBaseMemDescAtOutputPort(size_t portNum) const
{
  if (auto primDesc = getSelectedPrimitiveDescriptor())
  {
    const auto& outConfs = primDesc->getConfig().outConfs;
    if (outConfs.size() < portNum)                           // <=
    {
      OPENVINO_THROW("Can't get output memory desc at port: ",
                     portNum, ", incorrect port number");
    }
    return outConfs[portNum].getMemDesc();                   // <=
  }
  OPENVINO_THROW("Can't get output memory desc, "
                 "primitive descriptor is not selected");
}
The analyzer warning:
V557 [CERT-ARR30-C] Array overrun is possible. The 'portNum' index is pointing beyond array bound. node.cpp 413
These two functions are almost identical. It seems the developers copied one function and then made a few tweaks: they renamed a local variable, used different data members, and changed the exception message. However, the error migrated to the second function as well.
Let's fix it:
if (portNum >= outConfs.size()) { .... }
In general, it'd be better to refactor the code: merge these two functions into one common function and reuse it.
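For example, here's a rough sketch of such a shared helper. The helper's name and exact signature are my assumptions for illustration, not part of the actual OpenVINO code base:

// Hypothetical common helper: both getBaseMemDescAtInputPort and
// getBaseMemDescAtOutputPort could delegate the bounds check to it.
static MemoryDescPtr getMemDescAtPort(const std::vector<PortConfig>& confs,
                                      size_t portNum,
                                      const char* direction)
{
  if (portNum >= confs.size())
  {
    OPENVINO_THROW("Can't get ", direction, " memory desc at port: ",
                   portNum, ", incorrect port number");
  }
  return confs[portNum].getMemDesc();
}

With a single helper, the off-by-one check only has to be written (and fixed) once.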
Fragment N12
Now let's play a mini-game: find the typos in the OpenVINO project code. Those who find them deserve kudos! Those who don't deserve kudos, too! Still, I hope the answer will be obvious to everyone. Here's a great example of why developers need a static analyzer.
template <class Key, class Value>
using caseless_unordered_map = std::unordered_map<Key, Value,
CaselessHash<Key>, CaselessEq<Key>>;
using TypeToNameMap = ov::intel_cpu::
caseless_unordered_map<std::string, Type>;
static const TypeToNameMap& get_type_to_name_tbl() {
static const TypeToNameMap type_to_name_tbl = {
{"Constant", Type::Input},
{"Parameter", Type::Input},
{"Result", Type::Output},
{"Eye", Type::Eye},
{"Convolution", Type::Convolution},
{"GroupConvolution", Type::Convolution},
{"MatMul", Type::MatMul},
{"FullyConnected", Type::FullyConnected},
{"MaxPool", Type::Pooling},
{"AvgPool", Type::Pooling},
{"AdaptiveMaxPool", Type::AdaptivePooling},
{"AdaptiveAvgPool", Type::AdaptivePooling},
{"Add", Type::Eltwise},
{"IsFinite", Type::Eltwise},
{"IsInf", Type::Eltwise},
{"IsNaN", Type::Eltwise},
{"Subtract", Type::Eltwise},
{"Multiply", Type::Eltwise},
{"Divide", Type::Eltwise},
{"SquaredDifference", Type::Eltwise},
{"Maximum", Type::Eltwise},
{"Minimum", Type::Eltwise},
{"Mod", Type::Eltwise},
{"FloorMod", Type::Eltwise},
{"Power", Type::Eltwise},
{"PowerStatic", Type::Eltwise},
{"Equal", Type::Eltwise},
{"NotEqual", Type::Eltwise},
{"Greater", Type::Eltwise},
{"GreaterEqual", Type::Eltwise},
{"Less", Type::Eltwise},
{"LessEqual", Type::Eltwise},
{"LogicalAnd", Type::Eltwise},
{"LogicalOr", Type::Eltwise},
{"LogicalXor", Type::Eltwise},
{"LogicalNot", Type::Eltwise},
{"Relu", Type::Eltwise},
{"LeakyRelu", Type::Eltwise},
{"Gelu", Type::Eltwise},
{"Elu", Type::Eltwise},
{"Tanh", Type::Eltwise},
{"Sigmoid", Type::Eltwise},
{"Abs", Type::Eltwise},
{"Sqrt", Type::Eltwise},
{"Clamp", Type::Eltwise},
{"Exp", Type::Eltwise},
{"SwishCPU", Type::Eltwise},
{"HSwish", Type::Eltwise},
{"Mish", Type::Eltwise},
{"HSigmoid", Type::Eltwise},
{"Round", Type::Eltwise},
{"PRelu", Type::Eltwise},
{"Erf", Type::Eltwise},
{"SoftPlus", Type::Eltwise},
{"SoftSign", Type::Eltwise},
{"Select", Type::Eltwise},
{"Log", Type::Eltwise},
{"BitwiseAnd", Type::Eltwise},
{"BitwiseNot", Type::Eltwise},
{"BitwiseOr", Type::Eltwise},
{"BitwiseXor", Type::Eltwise},
{"Reshape", Type::Reshape},
{"Squeeze", Type::Reshape},
{"Unsqueeze", Type::Reshape},
{"ShapeOf", Type::ShapeOf},
{"NonZero", Type::NonZero},
{"Softmax", Type::Softmax},
{"Reorder", Type::Reorder},
{"BatchToSpace", Type::BatchToSpace},
{"SpaceToBatch", Type::SpaceToBatch},
{"DepthToSpace", Type::DepthToSpace},
{"SpaceToDepth", Type::SpaceToDepth},
{"Roll", Type::Roll},
{"LRN", Type::Lrn},
{"Split", Type::Split},
{"VariadicSplit", Type::Split},
{"Concat", Type::Concatenation},
{"ConvolutionBackpropData", Type::Deconvolution},
{"GroupConvolutionBackpropData", Type::Deconvolution},
{"StridedSlice", Type::StridedSlice},
{"Slice", Type::StridedSlice},
{"Tile", Type::Tile},
{"ROIAlign", Type::ROIAlign},
{"ROIPooling", Type::ROIPooling},
{"PSROIPooling", Type::PSROIPooling},
{"DeformablePSROIPooling", Type::PSROIPooling},
{"Pad", Type::Pad},
{"Transpose", Type::Transpose},
{"LSTMCell", Type::RNNCell},
{"GRUCell", Type::RNNCell},
{"AUGRUCell", Type::RNNCell},
{"RNNCell", Type::RNNCell},
{"LSTMSequence", Type::RNNSeq},
{"GRUSequence", Type::RNNSeq},
{"AUGRUSequence", Type::RNNSeq},
{"RNNSequence", Type::RNNSeq},
{"FakeQuantize", Type::FakeQuantize},
{"BinaryConvolution", Type::BinaryConvolution},
{"DeformableConvolution", Type::DeformableConvolution},
{"TensorIterator", Type::TensorIterator},
{"Loop", Type::TensorIterator},
{"ReadValue", Type::MemoryInput}, // for construction from name
// ctor, arbitrary name is used
{"Assign", Type::MemoryOutput}, // for construction from layer ctor
{"Convert", Type::Convert},
{"NV12toRGB", Type::ColorConvert},
{"NV12toBGR", Type::ColorConvert},
{"I420toRGB", Type::ColorConvert},
{"I420toBGR", Type::ColorConvert},
{"MVN", Type::MVN},
{"NormalizeL2", Type::NormalizeL2},
{"ScatterUpdate", Type::ScatterUpdate},
{"ScatterElementsUpdate", Type::ScatterElementsUpdate},
{"ScatterNDUpdate", Type::ScatterNDUpdate},
{"Interpolate", Type::Interpolate},
{"RandomUniform", Type::RandomUniform},
{"ReduceL1", Type::Reduce},
{"ReduceL2", Type::Reduce},
{"ReduceLogicalAnd", Type::Reduce},
{"ReduceLogicalOr", Type::Reduce},
{"ReduceMax", Type::Reduce},
{"ReduceMean", Type::Reduce},
{"ReduceMin", Type::Reduce},
{"ReduceProd", Type::Reduce},
{"ReduceSum", Type::Reduce},
{"ReduceLogSum", Type::Reduce},
{"ReduceLogSumExp", Type::Reduce},
{"ReduceSumSquare", Type::Reduce},
{"Broadcast", Type::Broadcast},
{"EmbeddingSegmentsSum", Type::EmbeddingSegmentsSum},
{"EmbeddingBagPackedSum", Type::EmbeddingBagPackedSum},
{"EmbeddingBagOffsetsSum", Type::EmbeddingBagOffsetsSum},
{"Gather", Type::Gather},
{"GatherElements", Type::GatherElements},
{"GatherND", Type::GatherND},
{"GridSample", Type::GridSample},
{"OneHot", Type::OneHot},
{"RegionYolo", Type::RegionYolo},
{"ShuffleChannels", Type::ShuffleChannels},
{"DFT", Type::DFT},
{"IDFT", Type::DFT},
{"RDFT", Type::RDFT},
{"IRDFT", Type::RDFT},
{"Abs", Type::Math},
{"Acos", Type::Math},
{"Acosh", Type::Math},
{"Asin", Type::Math},
{"Asinh", Type::Math},
{"Atan", Type::Math},
{"Atanh", Type::Math},
{"Ceil", Type::Math},
{"Ceiling", Type::Math},
{"Cos", Type::Math},
{"Cosh", Type::Math},
{"Floor", Type::Math},
{"HardSigmoid", Type::Math},
{"If", Type::If},
{"Neg", Type::Math},
{"Reciprocal", Type::Math},
{"Selu", Type::Math},
{"Sign", Type::Math},
{"Sin", Type::Math},
{"Sinh", Type::Math},
{"SoftPlus", Type::Math},
{"Softsign", Type::Math},
{"Tan", Type::Math},
{"CTCLoss", Type::CTCLoss},
{"Bucketize", Type::Bucketize},
{"CTCGreedyDecoder", Type::CTCGreedyDecoder},
{"CTCGreedyDecoderSeqLen", Type::CTCGreedyDecoderSeqLen},
{"CumSum", Type::CumSum},
{"DetectionOutput", Type::DetectionOutput},
{"ExperimentalDetectronDetectionOutput",
Type::ExperimentalDetectronDetectionOutput},
{"LogSoftmax", Type::LogSoftmax},
{"TopK", Type::TopK},
{"GatherTree", Type::GatherTree},
{"GRN", Type::GRN},
{"Range", Type::Range},
{"Proposal", Type::Proposal},
{"ReorgYolo", Type::ReorgYolo},
{"ReverseSequence", Type::ReverseSequence},
{"ExperimentalDetectronTopKROIs",
Type::ExperimentalDetectronTopKROIs},
{"ExperimentalDetectronROIFeatureExtractor",
Type::ExperimentalDetectronROIFeatureExtractor},
{"ExperimentalDetectronPriorGridGenerator",
Type::ExperimentalDetectronPriorGridGenerator},
{"ExperimentalDetectronGenerateProposalsSingleImage",
Type::ExperimentalDetectronGenerateProposalsSingleImage},
{"ExtractImagePatches", Type::ExtractImagePatches},
{"GenerateProposals", Type::GenerateProposals},
{"Inverse", Type::Inverse},
{"NonMaxSuppression", Type::NonMaxSuppression},
{"NonMaxSuppressionIEInternal", Type::NonMaxSuppression},
{"NMSRotated", Type::NonMaxSuppression},
{"MatrixNms", Type::MatrixNms},
{"MulticlassNms", Type::MulticlassNms},
{"MulticlassNmsIEInternal", Type::MulticlassNms},
{"Multinomial", Type::Multinomial},
{"Reference", Type::Reference},
{"Subgraph", Type::Subgraph},
{"PriorBox", Type::PriorBox},
{"PriorBoxClustered", Type::PriorBoxClustered},
{"Interaction", Type::Interaction},
{"MHA", Type::MHA},
{"Unique", Type::Unique},
{"Ngram", Type::Ngram},
{"ScaledDotProductAttention", Type::ScaledDotProductAttention},
{"ScaledDotProductAttentionWithKVCache",
Type::ScaledDotProductAttention},
{"PagedAttentionExtension", Type::ScaledDotProductAttention},
{"RoPE", Type::RoPE},
{"GatherCompressed", Type::Gather},
{"CausalMaskPreprocess", Type::CausalMaskPreprocess},
};
return type_to_name_tbl;
}
It's amusing that many devs think it's pointless to look for errors in such code, so they just scroll down to see the answer right away.
There's also some irony in how, even after weighing the inefficiency and labor costs, developers still monotonously search for such errors by hand instead of using static analyzers.
It's so simple: take the analyzer, run the analysis, and look at the list of warnings. Profit. No need to waste time hunting for bugs; they're already highlighted. Just fix them, and that's all.
The analyzer warnings:
Briefly, here are the typos:
static const TypeToNameMap& get_type_to_name_tbl() {
static const TypeToNameMap type_to_name_tbl = {
....,
{"Abs", Type::Eltwise}, // <=
....,
{"SoftPlus", Type::Eltwise}, // <=
....,
{"Abs", Type::Math}, // <=
....,
{"SoftPlus", Type::Math}, // <=
....,
};
return type_to_name_tbl;
}
Clearly, there shouldn't be identical keys here: when duplicates occur, only one of the entries actually makes it into the map.
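The consequence is easy to show with a toy example. When std::unordered_map is built from an initializer list, a duplicate key isn't a compilation error: only one of the entries ends up in the map, and the other is silently dropped (with common implementations, the later one is lost). The caseless map used in OpenVINO behaves the same way, so the entries {"Abs", Type::Math} and {"SoftPlus", Type::Math} most likely never take effect.

#include <iostream>
#include <string>
#include <unordered_map>

// Toy model of the problem: two initializer-list entries share the key "Abs".
int main()
{
  const std::unordered_map<std::string, int> table = {
    { "Abs", 1 },   // stands in for { "Abs", Type::Eltwise }
    { "Abs", 2 },   // stands in for { "Abs", Type::Math }
  };

  // Only one entry survives; with common implementations it's the first one.
  std::cout << table.size() << ' ' << table.at("Abs") << '\n';   // typically prints: 1 1
}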
What a fascinating typo adventure we've had!
The first part of the article about the OpenVINO project check comes to an end. May the force of accuracy and concentration be with you!
We'll send all the bugs to the developers and hope they'll get fixed soon.
Just a quick reminder: the typos and other errors we've spotted only show that programmers are human too. To be happy and to fix errors in code, all they need is a small, efficient static analyzer of their own.
Traditionally, I'd suggest you try our PVS-Studio analyzer. We have a free license for open-source projects.
Take care of yourself and all the best!