[SwiftLexicalLookup] Unqualified lookup caching #3068


Open · wants to merge 9 commits into main
Conversation

@MAJKFL (Contributor) commented Apr 30, 2025

This PR introduces optional caching support to SwiftLexicalLookup. To use it, clients pass an instance of LookupCache as a parameter to the lookup function.

LookupCache keeps track of cache member hits. To keep the cache from using too much memory, clients can call the LookupCache.evictEntriesWithoutHit function, which removes members without a hit and resets the hit property of the remaining members. Calling this function after every lookup effectively maintains one path from a leaf to the root of the scope tree in the cache.
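The hit-tracking eviction policy can be sketched with a self-contained toy cache (illustrative only; the real LookupCache stores lookup results keyed by syntax identifiers):

```swift
// Toy sketch of the "evict entries without hit" policy: every read or
// write marks an entry as hit; `evictEntriesWithoutHit` drops the cold
// entries and resets the markers for the next round.
final class HitTrackingCache<Key: Hashable, Value> {
  private var storage: [Key: Value] = [:]
  private var hits: Set<Key> = []

  subscript(key: Key) -> Value? {
    get {
      guard let value = storage[key] else { return nil }
      hits.insert(key)  // reading an entry marks it as hit
      return value
    }
    set {
      storage[key] = newValue
      hits.insert(key)  // freshly written entries also count as hit
    }
  }

  /// Removes every entry that was not touched since the previous
  /// eviction pass, then resets the hit markers.
  func evictEntriesWithoutHit() {
    storage = storage.filter { hits.contains($0.key) }
    hits.removeAll()
  }

  var count: Int { storage.count }
}
```

Calling `evictEntriesWithoutHit()` after each lookup then keeps only the entries the most recent lookup actually touched, which is what maintains the single leaf-to-root path described above.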

Clients can also optionally set the drop value:

/// Creates a new unqualified lookup cache.
/// The `drop` parameter specifies how many eviction calls will be
/// ignored before evicting not-hit members of the cache.
///
/// Example cache eviction sequences (s - skip, e - evict):
/// - `drop = 0` - `e -> e -> e -> e -> e -> ...`
/// - `drop = 1` - `s -> e -> s -> e -> s -> ...`
/// - `drop = 3` - `s -> s -> s -> e -> s -> ...`
///
/// - Note: `drop = 0` effectively maintains exactly one path of cached results to
/// the root in the cache (assuming we evict cache members after each lookup in a sequence of lookups).
/// The higher the `drop` value, the more such paths can potentially be stored in the cache at any given moment.
/// Because of that, a higher `drop` value also translates to a higher number of cache hits,
/// but it might not directly translate to better performance. Because of the larger memory footprint,
/// memory accesses could take longer, slowing down the eviction process. That's why the `drop` value
/// can be fine-tuned to maximize performance given the file size,
/// number of lookups, and amount of available memory.
public init(drop: Int = 0) {
  self.dropMod = drop + 1
}
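The skip/evict cadence that `dropMod` drives could look like the following standalone sketch (hypothetical names; the actual counter logic lives inside LookupCache):

```swift
// Illustrative sketch of the `drop` cadence: every eviction request
// increments a counter modulo `drop + 1`, and only every
// `(drop + 1)`-th request actually evicts.
struct DropCounter {
  private let dropMod: Int
  private var evictionCount = 0

  init(drop: Int = 0) {
    self.dropMod = drop + 1
  }

  /// Returns `true` when this call should actually evict ("e"),
  /// `false` when it should be skipped ("s").
  mutating func shouldEvict() -> Bool {
    evictionCount = (evictionCount + 1) % dropMod
    return evictionCount == 0
  }
}
```

With `drop = 0` every call evicts; with `drop = 3` the sequence is s, s, s, e, s, ..., matching the doc comment above.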

@MAJKFL (Contributor, Author) commented Apr 30, 2025

swiftlang/swift#81209

@swift-ci Please test

@ahoppen (Member) left a comment


Without diving too deeply into the details: I am a little concerned about the cache eviction behavior and the fact that you need to manually call evictEntriesWithoutHit (which incidentally doesn’t seem to be called in this PR or swiftlang/swift#81209), and I think it’s easy for clients to forget to call it. Does this more complex cache eviction policy provide significant benefits over a simple LRU cache that keeps, say, 100 cache entries? We could share the LRUCache type that we currently have in SwiftCompilerPluginMessageHandling for that. Curious to hear your opinion.
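For reference, the LRU policy suggested here can be sketched as a small standalone type (a simplified stand-in, not the actual LRUCache from SwiftCompilerPluginMessageHandling, which uses a doubly linked list for O(1) recency updates):

```swift
// Simplified LRU cache sketch: a dictionary for storage plus an array
// tracking recency (most recently used key is last). Inserting past
// capacity evicts the least recently used entry.
final class SimpleLRUCache<Key: Hashable, Value> {
  private var table: [Key: Value] = [:]
  private var recency: [Key] = []
  let capacity: Int

  init(capacity: Int) {
    precondition(capacity > 0)
    self.capacity = capacity
  }

  subscript(key: Key) -> Value? {
    get {
      guard let value = table[key] else { return nil }
      touch(key)
      return value
    }
    set {
      guard let newValue else {
        table[key] = nil
        recency.removeAll { $0 == key }
        return
      }
      table[key] = newValue
      touch(key)
      if table.count > capacity {
        let evicted = recency.removeFirst()  // drop least recently used
        table[evicted] = nil
      }
    }
  }

  private func touch(_ key: Key) {
    recency.removeAll { $0 == key }
    recency.append(key)
  }
}
```

The array-based recency tracking is O(n) per access; the real implementation avoids that with a linked list, but the eviction behavior is the same.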

/// memory accesses could take longer, slowing down the eviction process. That's why the `drop` value
/// could be fine-tuned to maximize the performance given file size,
/// number of lookups, and amount of available memory.
public init(drop: Int = 0) {
Member:
I’m not a fan of the drop naming here. I don’t have a better suggestion yet, maybe I’ll come up with one.

Contributor (Author):
Yes, I agree it is a bit ambiguous. What about skip?

@MAJKFL (Contributor, Author) commented May 26, 2025

Without diving too deeply into the details: I am a little concerned about the cache eviction behavior and the fact that you need to manually call evictEntriesWithoutHit (which incidentally doesn’t seem to be called in this PR or swiftlang/swift#81209) and I think it’s easy for clients to forget to call it.

Hi Alex, thank you for the suggestions and sorry for the late reply. I got quite busy with school. Thank you for pointing out evictEntriesWithoutHit is not called in the other PR. Originally, I called the method inside SyntaxProtocol.lookup after performing lookup, but ended up passing eviction to the client for extra flexibility. I must’ve forgotten to put it there. I think there’s enough evidence that it was a bad idea :).

Does this more complex cache eviction policy provide significant benefits over a simple LRU cache that keeps, say, 100 cache entries? We could share the LRUCache type that we currently have in SwiftCompilerPluginMessageHandling for that. Curious to hear your opinion.

The current implementation assumes subsequent lookups happen in close proximity to one another, as happens, e.g., in the compiler during a single top-to-bottom scan (the best case). The algorithm follows the intuition that for any (close) subsequent lookup, we shouldn’t recompute more than one scope. In a top-to-bottom scan that maintains one path to the root, we always have a guaranteed cache hit at the first common ancestor. I think a sufficiently big LRU cache would behave similarly, but it would require more memory than this approach without providing additional speedup. I’ve also noticed that growing the cache too big leads to diminishing returns. I suppose that could happen because less of the data structure can remain cached in memory.

I attach below a sketch I used when pitching the idea to @DougGregor that visualizes an optimal top-bottom scan. In each step, blue represents contents of the cache, red represents evicted entries and green arrows point at the lookup position.
[Screenshot: sketch of the cache contents during an optimal top-to-bottom scan]

I think SwiftLexicalLookup could still benefit from an LRU cache though. The current implementation lacks the ability to arbitrarily look up previously evaluated names without reevaluating a great part of the syntax tree below. What if we still used the optimal, small cache from the current implementation for subsequent lookups and maintained a large LRU cache for symbols/leaves that would fill up alongside it? This way we would have the best of both worlds without blowing up the size of the LRU cache with intermediate scope results. What do you think about this idea?

@ahoppen (Member) commented May 26, 2025

Would it be possible to use an LRU cache and provide an eviction method that can be called to clean up the cache when we know that some parts of it are no longer relevant (what you described in the sketch above)? That way we would get reasonable out-of-the-box behavior and not have an ever-growing cache, but also have the ability to keep the cache size low in cases where the client (here the compiler) cares about it and knows the access patterns.

@MAJKFL (Contributor, Author) commented May 27, 2025

That way we would get reasonable out-of-the-box behavior and not have an ever-growing cache, but also have the ability to keep the cache size low in cases where the client (here the compiler) cares about it and knows the access patterns.

Ah yes, it’s a very good idea to have an upper bound on the size of the cache; I hadn’t thought about that. I’ll look into how to extend LRUCache from SwiftCompilerPluginMessageHandling with the cleanup algorithm then. Should we hoist LRUCache to some other, shared place, or should it remain in SwiftCompilerPluginMessageHandling?

@ahoppen (Member) commented May 28, 2025

Should we hoist LRUCache to some other, shared place, or should it remain in SwiftCompilerPluginMessageHandling?

We should hoist it up. We could put it into a new module or just stick it in the SwiftSyntax target at the package access level – I haven’t quite decided on that yet but I think it’s something that we could also change easily once the rest of the PR has taken shape.

@MAJKFL (Contributor, Author) commented Jun 17, 2025

swiftlang/swift#81209

@swift-ci Please test

@MAJKFL (Contributor, Author) commented Jun 18, 2025

swiftlang/swift#81209

@swift-ci Please test Windows Platform

@ahoppen (Member) left a comment


Thanks for addressing my review comments. I just had a chance to look at the PR again and left a few comments inline.

/// memory accesses could take longer, slowing down the eviction process. That's why the `drop` value
/// could be fine-tuned to maximize the performance given file size,
/// number of lookups, and amount of available memory.
public init(capacity: Int, drop: Int = 0) {
Member:

Just an idea: Would it make sense to move the drop parameter to evictEntriesWithoutHit? That way clients don’t have to think about the dropping cache eviction policy unless they start calling evictEntriesWithoutHit. It would also open up the option to vary the size of the cache dynamically depending on the shape of the code that we’re in (not sure if that’s useful or not). It would also remove the need for bypassDropCounter in that function, because you could pass drop: 0 there, I think.

Contributor (Author):

That's a good idea; it avoids keeping too much additional state in the cache. I've moved drop to evictEntriesWithoutHit.

/// `nil` if there's no cache entry for the given `id`.
/// Adds `id` and ids of all ancestors to the cache `hits`.
func getCachedAncestorResults(id: SyntaxIdentifier) -> [LookupResult]? {
guard let results = ancestorResultsCache[id] else { return nil }
Member:

If the user doesn’t call evictEntriesWithoutHit, hits will keep growing indefinitely. Should we clear up hits periodically for elements that are no longer in the cache, e.g. as a kind of garbage collection when hits.count > capacity * 2? Or should we only keep track of hits if the user opts into it in the initializer?

Contributor (Author):

Thanks for noticing that. I think garbage collection makes more sense in this case, in combination with the drop parameter inside evictEntriesWithoutHit. This way clients won’t have to think about the custom eviction policy unless they specifically want to use it.
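The garbage-collection idea could look roughly like this (hypothetical free function, purely illustrative):

```swift
// Sketch of garbage-collecting the `hits` set: once the recorded hits
// have clearly outgrown the cache capacity, drop the hits whose
// entries are no longer cached.
func collectStaleHits<Key: Hashable>(
  hits: inout Set<Key>,
  cachedKeys: Set<Key>,
  capacity: Int
) {
  // Only bother once the hit set exceeds twice the cache capacity.
  if hits.count > capacity * 2 {
    hits.formIntersection(cachedKeys)
  }
}
```

This keeps the hit set bounded even if a client never calls evictEntriesWithoutHit, at the cost of an occasional O(n) intersection.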

@@ -698,7 +735,8 @@ import SwiftSyntax
 public func lookup(
   _ identifier: Identifier?,
   at lookUpPosition: AbsolutePosition,
-  with config: LookupConfig
+  with config: LookupConfig,
+  cache: LookupCache?
Member:

Should we default cache to nil to avoid API breakage?

Contributor (Author):

It's only used internally and is not exposed to clients, since it's in an @_spi(Experimental) extension. I've also added @_spi(Experimental) to the function to avoid confusion. The main entry point of the query in ScopeSyntax.swift does, however, default cache to nil.

Member:

@rintaro Could you check if you have any concerns for the changes in LRUCache?

Member:

I feel like we should eventually want to make a "support" module to host things like this, but I'm fine with moving this to SwiftSyntax for now.

 ) -> [LookupResult] {
-  scope?.lookup(identifier, at: self.position, with: config) ?? []
+  if let cache, let identifier {
Member:

Does this mean that we don’t use the cache if you run lookup without an identifier? Shouldn’t we be able to return the results from the cache in that case, without filtering?

Contributor (Author):

No, the if statement here only checks for the special case where both identifier and cache are not nil at the same time (in which case we need to perform filtering). When the condition fails, at least one of them is nil and we directly return the result of scope?.lookup(identifier, at: self.position, with: config, cache: cache) ?? [].

@@ -33,12 +32,14 @@ public class LRUCache<Key: Hashable, Value> {
   private unowned var tail: _Node?

   public let capacity: Int
+  public private(set) var keysInCache: Set<Key>
@rintaro (Member) commented Jun 27, 2025

As Alex suggests, I'm not sure this addition is needed.

But even if we want this API, I don't think this Set storage is necessary. This is essentially table.keys except Dictionary.Keys is not a Set. I think something like this should be fine.

package var keys: some Collection<Key> {
  table.keys
}

The caller can create a Set from this.
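rintaro's suggestion can be illustrated with a minimal standalone type (sketch only; the real LRUCache stores linked-list nodes in its table, not raw values):

```swift
// Minimal sketch: expose the backing dictionary's keys as a Collection
// instead of maintaining a parallel Set alongside the table.
final class TinyCache<Key: Hashable, Value> {
  private var table: [Key: Value] = [:]

  /// The keys currently in the cache, backed directly by the table.
  var keys: some Collection<Key> {
    table.keys
  }

  subscript(key: Key) -> Value? {
    get { table[key] }
    set { table[key] = newValue }
  }
}
```

A caller that needs set semantics simply wraps the collection, e.g. `Set(cache.keys)`, so no redundant storage has to be kept in sync.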

Contributor (Author):

Yes, I think this is a much better idea that avoids redundancy. Thanks!
