OmniSciDB a5dc49c757
SpeculativeTopN.h File Reference

Speculative top N algorithm.

#include <cstddef>
#include <cstdint>
#include <memory>
#include <mutex>
#include <stdexcept>
#include <unordered_map>
#include <vector>


Classes

struct  SpeculativeTopNVal
 
struct  SpeculativeTopNEntry
 
class  SpeculativeTopNMap
 
class  SpeculativeTopNFailed
 
class  SpeculativeTopNBlacklist
 

Namespaces

 Analyzer
 

Functions

bool use_speculative_top_n (const RelAlgExecutionUnit &, const QueryMemoryDescriptor &)
 

Detailed Description

Speculative top N algorithm.

Definition in file SpeculativeTopN.h.
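
To make the idea behind this file concrete, below is a minimal, self-contained sketch of a speculative top-n merge over per-device partial results. It is only an illustration under simplifying assumptions (descending COUNT ordering, integer group keys, each fragment already sorted and deduplicated); the names TopNVal, Fragment, and speculative_top_n are invented for the example and are not the OmniSciDB classes listed above.

#include <algorithm>
#include <cstdint>
#include <stdexcept>
#include <unordered_map>
#include <utility>
#include <vector>

// Per-group state after merging: lower and upper bounds on the true COUNT.
// The bounds coincide when the group survived every device's local top-n cut.
struct TopNVal {
  int64_t lower{0};
  int64_t upper{0};
};

// One device's already-sorted (descending) top-n fragment: (group key, local COUNT).
using Fragment = std::vector<std::pair<int64_t, int64_t>>;

std::vector<std::pair<int64_t, int64_t>> speculative_top_n(
    const std::vector<Fragment>& per_device, const size_t n) {
  std::unordered_map<int64_t, TopNVal> merged;
  // Collect the union of keys so every key gets bounds from every device.
  for (const auto& fragment : per_device) {
    for (const auto& kv : fragment) {
      merged.try_emplace(kv.first);
    }
  }
  for (const auto& fragment : per_device) {
    // A key absent from this fragment may still exist on the device, but with
    // a count no larger than the fragment's smallest (cutoff) count.
    const int64_t cutoff = fragment.empty() ? 0 : fragment.back().second;
    for (auto& entry : merged) {
      const auto it = std::find_if(
          fragment.begin(), fragment.end(),
          [&entry](const auto& kv) { return kv.first == entry.first; });
      if (it != fragment.end()) {
        entry.second.lower += it->second;
        entry.second.upper += it->second;
      } else {
        entry.second.upper += cutoff;
      }
    }
  }
  // Rank by the lower bound and pick the first n entries.
  std::vector<std::pair<int64_t, TopNVal>> rows(merged.begin(), merged.end());
  std::sort(rows.begin(), rows.end(), [](const auto& a, const auto& b) {
    return a.second.lower > b.second.lower;
  });
  const size_t k = std::min(n, rows.size());
  const int64_t nth = k ? rows[k - 1].second.lower : 0;
  for (size_t i = 0; i < rows.size(); ++i) {
    const bool exact = rows[i].second.lower == rows[i].second.upper;
    // Speculation fails if a chosen entry's count is inexact, or if an entry
    // outside the chosen n could still beat the n-th chosen count.
    if ((i < k && !exact) || (i >= k && rows[i].second.upper > nth)) {
      throw std::runtime_error("speculative top-n failed; re-run with a full sort");
    }
  }
  std::vector<std::pair<int64_t, int64_t>> result;
  for (size_t i = 0; i < k; ++i) {
    result.emplace_back(rows[i].first, rows[i].second.lower);
  }
  return result;
}

int main() {
  // Two devices, each reporting its local top-2 (key, COUNT) pairs, sorted descending.
  const std::vector<Fragment> per_device{{{7, 90}, {3, 40}}, {{7, 80}, {5, 35}}};
  const auto top = speculative_top_n(per_device, 1);  // top-1 is provably key 7
  return top.front().first == 7 ? 0 : 1;
}

The key point is that a group absent from one device's fragment has only a lower bound on its merged count, so the merge either proves the chosen top n correct or gives up; the exception in the sketch plays the role that SpeculativeTopNFailed appears to play for the classes above, signalling the caller to redo the sort without speculation.
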

Function Documentation

bool use_speculative_top_n(const RelAlgExecutionUnit& ra_exe_unit, const QueryMemoryDescriptor& query_mem_desc)

SpeculativeTopN sort is used when there are multiple already-sorted results (i.e., GPU sort is used on multiple devices; see GroupByAndAggregate::gpuCanHandleOrderEntries), we want to pick the top n elements (a LIMIT clause exists), and this algorithm has already been chosen when creating the work unit (see RelAlgExecutor::createSortInputWorkUnit).

In addition, we currently support only queries with exactly two target expressions and only the COUNT aggregate (similar limitations exist for GPU sort support). For example, a query shaped like SELECT x, COUNT(*) FROM t GROUP BY x ORDER BY 2 DESC LIMIT 10 fits this pattern.

Definition at line 188 of file SpeculativeTopN.cpp.

References SortInfo::algorithm, g_cluster, SortInfo::limit, RelAlgExecutionUnit::sort_info, QueryMemoryDescriptor::sortOnGpu(), SpeculativeTopN, and RelAlgExecutionUnit::target_exprs.

Referenced by Executor::collectAllDeviceResults(), RelAlgExecutor::executeSort(), RelAlgExecutor::executeWorkUnit(), and QueryExecutionContext::launchGpuCode().

{
  if (g_cluster) {
    return false;
  }
  if (ra_exe_unit.target_exprs.size() != 2) {
    return false;
  }
  for (const auto target_expr : ra_exe_unit.target_exprs) {
    const auto agg_expr = dynamic_cast<const Analyzer::AggExpr*>(target_expr);
    if (agg_expr && !shared::is_any<kCOUNT, kCOUNT_IF>(agg_expr->get_aggtype())) {
      return false;
    }
  }
  return query_mem_desc.sortOnGpu() && ra_exe_unit.sort_info.limit &&
         ra_exe_unit.sort_info.algorithm == SortAlgorithm::SpeculativeTopN;
}
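
The callers listed above decide between the speculative and the regular sort path based on this predicate. The following hypothetical call-site sketch (not OmniSciDB code; execute_sort, speculative_sort, full_sort, and use_speculative_path are invented stand-ins) shows the control flow that the exception-based fallback suggests:

#include <stdexcept>
#include <vector>

// Invented stand-ins for this sketch; only the control flow is the point.
bool use_speculative_path();          // analogous to use_speculative_top_n(...)
std::vector<int> speculative_sort();  // fast path, may throw on failure
std::vector<int> full_sort();         // always-correct fallback

std::vector<int> execute_sort() {
  if (use_speculative_path()) {
    try {
      return speculative_sort();
    } catch (const std::runtime_error&) {
      // The top n could not be proven correct from the per-device partial
      // results (cf. SpeculativeTopNFailed), so fall through to a full sort.
    }
  }
  return full_sort();
}

The design keeps the common case cheap: when the speculation holds, only the small per-device top-n fragments are merged, and only on failure is the fully merged result sorted.
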
