C++ 中文周刊 Issue 106
News
Standards committee news, IDE updates, and compiler news go here.
For the latest compiler news, follow the HelloGCC WeChat account; this week's update is issue 194, 2023-03-22.
AMD EPYC Genoa-X CPU leaked: 1.25 GB of cache, expected to launch within the year.
A 3D V-Cache server part with over 1 GB of cache is wild. Zen 3 was already a bargain, and Zen 4 looks set to grab even more market share with this; Intel should be worried.
Articles
- Deducing template parameters from the left side of an expression
#include <cassert>
#include <cstdlib>
#include <string>
#include <type_traits>

class parse {
public:
parse(std::string str) : str_(str) {}
template<typename T>
operator T() && {
if constexpr (std::is_integral_v<T>) {
T t = atoi(str_.data());
return t;
}
else {
return str_;
}
}
private:
std::string str_;
};
int main() {
  std::string str = "42";
  int i = parse(str);
  assert(i == 42);
  std::string s = parse(str);
  assert(s == "42");
}
Interesting
- Did you know that C++23 added Monadic operations for std::expected?
#include <expected>
#include <iostream>

enum class error { runtime_error };
[[nodiscard]] auto foo() -> std::expected<int, error> {
return std::unexpected(error::runtime_error);
}
int main() {
const auto e = foo();
e.and_then([](const auto& e) -> std::expected<int, error> {
std::cout << int(e); // not printed
return {};
}).or_else([](const auto& e) -> std::expected<int, error> {
std::cout << int(e); // prints 0
return {};
});
}
About time; this is something we should have had long ago.
- Effortful Performance Improvements in C++
TL;DR: replacing std::unordered_map with tsl/robin_map.h gives a clear performance win (it's 2023, everyone should know by now how slow the standard library's hash map is). A rough sketch of that kind of drop-in swap follows.
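As an illustration only (my own sketch, not the article's code; the alias and the word-count function are made up), the swap usually amounts to changing the map type behind an alias, since tsl::robin_map mirrors std::unordered_map's interface:
#include <cstddef>
#include <string>
#include <vector>
#include <tsl/robin_map.h> // header-only, https://github.com/Tessil/robin-map

// Previously: template <typename K, typename V> using fast_map = std::unordered_map<K, V>;
template <typename K, typename V>
using fast_map = tsl::robin_map<K, V>;

// Hypothetical hot path: count distinct words.
std::size_t count_unique(const std::vector<std::string>& words) {
    fast_map<std::string, int> freq;
    for (const auto& w : words)
        ++freq[w]; // same operator[] / find / insert API as std::unordered_map
    return freq.size();
}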
- The Mystery of The Missing Bytes
TL;DR: the empty base class optimization; a small sketch of the effect is below.
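A minimal sketch of the effect (mine, not the article's): an empty object costs no extra bytes when it is a base class or a C++20 [[no_unique_address]] member, but a plain empty member still costs storage plus padding:
#include <cstdio>

struct empty {}; // a standalone empty class still has sizeof(empty) == 1

struct as_member { empty e; int x; };                          // typically 8 bytes (1 + padding + 4)
struct as_base : empty { int x; };                             // empty base optimization: typically 4 bytes
struct as_annotated { [[no_unique_address]] empty e; int x; }; // C++20; typically 4 bytes (MSVC needs [[msvc::no_unique_address]])

int main() {
    std::printf("%zu %zu %zu\n", sizeof(as_member), sizeof(as_base), sizeof(as_annotated));
}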
- Counting cycles and instructions on ARM-based Apple systems
Perf counters on the Apple M1 platform: https://gist.github.com/ibireme/173517c208c7dc333ba962c1f0d67d12
Example:
代码语言:javascript复制#include "performancecounters/event_counter.h"
event_collector collector;
void f() {
event_aggregate aggregate{};
for (size_t i = 0; i < repeat; i++) {
collector.start();
function(); // benchmark this function
event_count allocate_count = collector.end();
aggregate << allocate_count;
}
}
- Is boyer_moore_horspool faster than std::string::find?
A bit faster, but not by much; in practice the hotspot usually isn't here anyway. A minimal usage comparison follows.
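For reference, a small usage comparison (my own sketch, not the article's benchmark) of std::string::find against the C++17 std::boyer_moore_horspool_searcher:
#include <algorithm>
#include <functional> // std::boyer_moore_horspool_searcher
#include <iostream>
#include <string>

int main() {
    const std::string haystack = "this is a long text where we look for a needle";
    const std::string needle = "needle";

    // Plain std::string::find
    const auto pos = haystack.find(needle);

    // Horspool searcher: pays a preprocessing cost on the pattern,
    // which only pays off on long haystacks or repeated searches.
    const auto it = std::search(haystack.begin(), haystack.end(),
                                std::boyer_moore_horspool_searcher(needle.begin(), needle.end()));

    std::cout << pos << ' ' << (it - haystack.begin()) << '\n'; // both print 40
}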
- Kernel optimization, PSI edition: quickly spot performance bottlenecks and improve resource utilization
A monitoring service meant to find performance bottlenecks turned out to be a bottleneck itself, so they fixed the monitoring service's bug. Interesting.
- sched/numa: Enhance vma scanning
I can't fully follow it, but it looks interesting.
- Token Bucket: or how to throttle
An implementation of the token bucket rate-limiting algorithm. Just for fun; the code below is pasted directly from
https://github.com/mvorbrodt/blog/blob/master/src/token_bucket.hpp
#include <atomic>
#include <chrono>
#include <thread>
#include <stdexcept>
// Multi-Threaded Version of Token Bucket
class token_bucket_mt {
public:
using clock = std::chrono::steady_clock;
using duration = clock::duration;
using time_point = clock::time_point;
using atomic_duration = std::atomic<duration>;
using atomic_time_point = std::atomic<time_point>;
token_bucket_mt(std::size_t tokens_per_second, std::size_t token_capacity, bool full = true)
: token_bucket_mt(std::chrono::duration_cast<duration>(std::chrono::seconds(1)) / tokens_per_second, token_capacity, full) {}
token_bucket_mt(duration time_per_token, std::size_t token_capacity, bool full = true)
: m_time{ full ? time_point{} : clock::now() }, m_time_per_token{ time_per_token }, m_time_per_burst{ time_per_token * token_capacity }
{
if (time_per_token.count() <= 0) throw std::invalid_argument("Invalid token rate!");
if (!token_capacity) throw std::invalid_argument("Invalid token capacity!");
}
token_bucket_mt(const token_bucket_mt& other) noexcept
: m_time{ other.m_time.load() },
m_time_per_token{ other.m_time_per_token.load() },
m_time_per_burst{ other.m_time_per_burst.load() } {}
token_bucket_mt& operator = (const token_bucket_mt& other) noexcept {
m_time = other.m_time.load();
m_time_per_token = other.m_time_per_token.load();
m_time_per_burst = other.m_time_per_burst.load();
return *this;
}
void set_rate(std::size_t tokens_per_second) {
set_rate(std::chrono::duration_cast<duration>(std::chrono::seconds(1)) / tokens_per_second);
}
void set_rate(duration time_per_token) {
if (time_per_token.count() <= 0) throw std::invalid_argument("Invalid token rate!");
m_time_per_token = time_per_token;
}
void set_capacity(std::size_t token_capacity) {
if (!token_capacity) throw std::invalid_argument("Invalid token capacity!");
m_time_per_burst = m_time_per_token.load() * token_capacity;
}
void drain() noexcept {
m_time = clock::now();
}
void refill() noexcept {
auto now = clock::now();
m_time = now - m_time_per_burst.load();
}
[[nodiscard]] bool try_consume(std::size_t tokens = 1, duration* time_needed = nullptr) noexcept {
auto now = clock::now();
auto delay = tokens * m_time_per_token.load(std::memory_order_relaxed);
auto min_time = now - m_time_per_burst.load(std::memory_order_relaxed);
auto old_time = m_time.load(std::memory_order_relaxed);
auto new_time = min_time > old_time ? min_time : old_time;
while (true) {
new_time += delay;
if (new_time > now) {
if (time_needed != nullptr)
*time_needed = new_time - now;
return false;
}
if (m_time.compare_exchange_weak(old_time, new_time, std::memory_order_relaxed, std::memory_order_relaxed))
return true;
new_time = old_time;
}
}
void consume(std::size_t tokens = 1) noexcept {
while (!try_consume(tokens))
std::this_thread::yield();
}
void wait(std::size_t tokens = 1) noexcept {
duration time_needed;
while (!try_consume(tokens, &time_needed))
std::this_thread::sleep_for(time_needed);
}
private:
atomic_time_point m_time;
atomic_duration m_time_per_token;
atomic_duration m_time_per_burst;
};
// Default Token Bucket
using token_bucket = token_bucket_mt;
#include <iostream>
#include <iomanip>
#include <vector>
#include <latch>
#include "token_bucket.hpp"
#define all(c) for(auto& it : c) it
int main() {
using namespace std;
using namespace std::chrono;
try {
auto N = 1;
auto bucket = token_bucket(1ms, 1000, true);
auto count = thread::hardware_concurrency() - 1;
auto run = atomic_bool{ true };
auto total = atomic_uint64_t{};
auto counts = vector<atomic_uint64_t>(count);
auto fair_start = latch(count + 2);
auto threads = vector<thread>(count);
thread([&] {
fair_start.arrive_and_wait();
auto start = steady_clock::now();
auto sec = 0;
while (run)
{
auto cnt = 1;
for (auto& count : counts)
cout << fixed << "Cnt " << cnt++ << ":\t" << count << "\t / \t" << (100.0 * count / total) << " %\n";
cout << "Total:\t" << total << "\nTime:\t" << duration_cast<seconds>(steady_clock::now() - start).count() << "s\n" << endl;
this_thread::sleep_until(start + seconds(++sec));
}
}).detach();
auto worker = [&](auto x) {
fair_start.arrive_and_wait();
while (run) {
bucket.consume(N);
total += N;
counts[x] += N;
}
};
thread([&] {
bucket.drain();
fair_start.arrive_and_wait();
this_thread::sleep_for(3s);
bucket.set_rate(1s);
bucket.set_capacity(1000000);
this_thread::sleep_for(3s);
bucket.refill();
}).detach();
all(threads) = thread(worker, --count);
cin.get();
run = false;
all(threads).join();
} catch (exception& ex) {
cerr << ex.what() << endl;
}
}
Videos
- C++ Weekly - Ep 368 - The Power of template-template Parameters: A Basic Guide
See the code:
#include <list>
#include <vector>
template <template <typename T, typename Alloc = std::allocator<T>>
class Container>
[[nodiscard]] constexpr auto transform_into(auto f, const auto &input) noexcept(
noexcept(f(input.front())))
requires requires { f(input.front()); }
{
Container<decltype(f(input.front()))> retval;
retval.reserve(input.size());
for (auto &&value : input) {
retval.push_back(f(value));
}
return retval;
}
int main() {
std::list<int> data{1, 2, 3, 4};
const auto result =
transform_into<std::vector>([](const auto i) { return i * 1.1; }, data);
static_assert(std::is_same_v<decltype(result)::value_type, double>);
}
All I can say is that this example doesn't warrant that level of abstraction.
- std::function: a deep dive behind the curtain - Andreas Reischuck - Meeting C++ 2022
Open-source projects looking for contributors
- asteria: an embeddable scripting language. Looking for contributors long term; please lend a hand. You can also join QQ group 384042845 to chat with the author.
- pika: a NoSQL store, Redis over RocksDB. Contributors are very much needed; if you're interested, join QQ group 294254078.
- Unilang: a general-purpose programming language from deepin. The ideas are interesting and it also needs people; check the GitHub discussions or the deepin forum if interested. A long-term recommendation here.
- paozhu: a web framework by a Chinese developer. The author talked with the drogon project but, rather than co-developing, built on asio around his own needs. Give it a try if interested; a long-term recommendation here.
New projects / version updates
- StaticAnalysis: a static-analysis GitHub Action template
If you've read this far and have suggestions, questions, or corrections, please leave a comment. Thanks!
Permanent link to this article