* wip
* wip
* handle wildcards in override name
* remove wait for ready, add comment about sync, force initial sync complete in test
* address comments
* authorize: add databroker server and record version to result, force sync via polling
* wrap inmem store to take read lock when grabbing databroker versions
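A rough sketch of what that read-lock wrapper can look like (the type and field names here are illustrative, not the actual implementation):

    import "sync"

    // lockedStore guards version reads with an RWMutex so readers grabbing
    // the databroker versions can't race concurrent writers.
    type lockedStore struct {
        mu            sync.RWMutex
        serverVersion string
        recordVersion string
    }

    // Versions takes a read lock while copying out both versions.
    func (s *lockedStore) Versions() (serverVersion, recordVersion string) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        return s.serverVersion, s.recordVersion
    }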
* address code review comments
* reset max to 0
* refactor backend, implement encrypted store
* refactor in-memory store
* wip
* wip
* wip
* add syncer test
* fix redis expiry
* fix linting issues
* fix test by skipping non-config records
* fix backoff import
* fix init issues
* fix query
* wait for initial sync before starting directory sync
* add type to SyncLatest
* add more log messages, fix deadlock in in-memory store, always return server version from SyncLatest
* update sync types and tests
* add redis tests
* skip macos in github actions
* add comments to proto
* split getBackend into separate methods
* handle errors in initVersion
* return different error for not found vs other errors in get
* use exponential backoff for redis transaction retry
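A rough sketch of what that retry can look like, assuming go-redis v8 and github.com/cenkalti/backoff/v4 (the key handling and update function are illustrative, not the actual transaction):

    import (
        "context"

        "github.com/cenkalti/backoff/v4"
        "github.com/go-redis/redis/v8"
    )

    // updateWithRetry retries an optimistic WATCH/MULTI/EXEC transaction with
    // exponential backoff whenever another writer invalidates the watched key.
    func updateWithRetry(ctx context.Context, rdb *redis.Client, key string, update func(string) string) error {
        txn := func(tx *redis.Tx) error {
            val, err := tx.Get(ctx, key).Result()
            if err != nil && err != redis.Nil {
                return err
            }
            _, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
                pipe.Set(ctx, key, update(val), 0)
                return nil
            })
            return err
        }

        bo := backoff.WithContext(backoff.NewExponentialBackOff(), ctx)
        return backoff.Retry(func() error {
            err := rdb.Watch(ctx, txn, key)
            if err == redis.TxFailedErr {
                return err // transient conflict: retry with backoff
            }
            if err != nil {
                return backoff.Permanent(err) // real error: stop retrying
            }
            return nil
        }, bo)
    }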
* rename raw to result
* use context instead of close channel
* store type urls as constants in databroker
* use timestamppb instead of ptypes
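A minimal sketch of that change, assuming the google.golang.org/protobuf well-known-type package (the helper name is hypothetical):

    import (
        "time"

        "google.golang.org/protobuf/types/known/timestamppb"
    )

    // toTimestamp converts a time.Time into a protobuf Timestamp.
    func toTimestamp(t time.Time) *timestamppb.Timestamp {
        // previously: ts, err := ptypes.TimestampProto(t)
        return timestamppb.New(t)
    }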
* fix group merging not waiting
* change locked names
* update GetAll to return latest record version
* add method to grpcutil to get the type url for a protobuf type
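For reference, a type URL is conventionally the message's full name under the type.googleapis.com/ prefix; a minimal sketch of such a helper (the function name is hypothetical, not necessarily the one added to grpcutil):

    import "google.golang.org/protobuf/proto"

    // GetTypeURL returns the Any-style type URL for a protobuf message,
    // e.g. "type.googleapis.com/google.protobuf.Timestamp".
    func GetTypeURL(msg proto.Message) string {
        return "type.googleapis.com/" + string(msg.ProtoReflect().Descriptor().FullName())
    }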
- gofumpt everything
- fix TLS MinVersion to be at least 1.2
- add octal syntax
- remove newlines
- fix potential decompression bomb in ecjson
- remove implicit memory aliasing in for loops.
Signed-off-by: Bobby DeSimone <bobbydesimone@gmail.com>
* replace GetAllPages with InitialSync, improve merge performance
* fmt proto
* add test for base64 function
* add sync test
* go mod tidy
Co-authored-by: Bobby DeSimone <bobbydesimone@gmail.com>
* internal/databroker: make Sync send data in smaller batches
gRPC streaming performs better when sending multiple smaller messages instead of one big message.
Benchmark results for sending 10k messages at once vs. multiple batches of 100 messages each:
    name     old time/op  new time/op  delta
    Sync-12  14.5ms ± 3%  12.4ms ± 2%  -14.40%  (p=0.000 n=10+9)
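A sketch of the batching approach (the batch size and the generated message and field names below are illustrative stand-ins for the real databroker types):

    const syncBatchSize = 100

    // sendInBatches streams records in chunks of syncBatchSize so each gRPC
    // message stays small instead of one oversized response.
    func sendInBatches(records []*databroker.Record, send func(*databroker.SyncResponse) error) error {
        for len(records) > 0 {
            n := syncBatchSize
            if n > len(records) {
                n = len(records)
            }
            if err := send(&databroker.SyncResponse{Records: records[:n]}); err != nil {
                return err
            }
            records = records[n:]
        }
        return nil
    }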
* cache: add test for databroker sync
Currently, we're doing "sync" in the databroker server. If we're going to
support multiple databroker server instances, this mechanism won't work.
This commit moves the "sync" to the storage backend by adding a new Watch
method. Watch returns a channel to the caller; every time something happens
inside the storage, we notify the caller by sending a message on this
channel.
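A minimal sketch of what such a Watch-based backend can look like (the interface and notification scheme below are illustrative, not the exact implementation):

    import (
        "context"
        "sync"

        "google.golang.org/protobuf/types/known/anypb"
    )

    type Backend interface {
        Put(ctx context.Context, id string, data *anypb.Any) error
        // Watch returns a channel that receives a signal whenever the
        // storage changes.
        Watch(ctx context.Context) <-chan struct{}
    }

    type inMemoryBackend struct {
        mu       sync.Mutex
        data     map[string]*anypb.Any
        watchers []chan struct{}
    }

    func (b *inMemoryBackend) Watch(ctx context.Context) <-chan struct{} {
        ch := make(chan struct{}, 1)
        b.mu.Lock()
        b.watchers = append(b.watchers, ch)
        b.mu.Unlock()
        return ch
    }

    func (b *inMemoryBackend) Put(ctx context.Context, id string, data *anypb.Any) error {
        b.mu.Lock()
        b.data[id] = data
        watchers := append([]chan struct{}(nil), b.watchers...)
        b.mu.Unlock()
        for _, ch := range watchers {
            select {
            case ch <- struct{}{}: // notify the watcher of a change
            default: // a notification is already pending; skip
            }
        }
        return nil
    }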