From 64992f170c444021f5cbdd019d17307f5d3f3ecc Mon Sep 17 00:00:00 2001 From: Volker Mische Date: Mon, 12 Aug 2019 23:45:43 +0200 Subject: [PATCH] Move Graphsync A spec from PR into the design history (#159) Graphsync was never implemented this way, but it's worth preserving the ideas behind that design in this repository. --- .../2018.07-graphsync-a.md | 758 ++++++++++++++++++ ...U4LTExZTgtOWZkOS01NzI4NDRhYWJmNTkucG5n.png | Bin 0 -> 29119 bytes 2 files changed, 758 insertions(+) create mode 100644 design/history/exploration-reports/2018.07-graphsync-a.md create mode 100644 design/history/exploration-reports/2018.07-graphsync-a/MTE1NDM5MC80NTg0MTc5OS1lMTFjZGQ4MC1iY2U4LTExZTgtOWZkOS01NzI4NDRhYWJmNTkucG5n.png diff --git a/design/history/exploration-reports/2018.07-graphsync-a.md b/design/history/exploration-reports/2018.07-graphsync-a.md new file mode 100644 index 00000000..e7442afd --- /dev/null +++ b/design/history/exploration-reports/2018.07-graphsync-a.md @@ -0,0 +1,758 @@ +GraphSync +========= + +GraphSync is a protocol and implementation to retrieve a subgraph of a DAG by providing a CID plus some meta information (a.k.a. an IPLD Selector) about which parts should be returned. + + +Definitions +----------- + + - **Client**: The peer that sends out a request. + - **Server**: The peer that receives a request from a Client and responds to it. + - **Consumer**: Sits between the Client and the Server. It verifies the DAG, applies filters and retries requests on other peers. + - **Selector**: Some identifier (flag) together with some data describing a traversal in the DAG + - **Block**: A CID together with the corresponding binary data + + +General architecture +-------------------- + +GraphSync needs to do a lot of things in the background, like verification and error handling when connecting to different peers. Hence there is a difference between the Client-Server interface and the actual wire protocol. + +``` +┌───────────────────┐ ┌──────┐ +│ Local │ │Remote│ +│ │ │ │ +│Client <─> Consumer│ <─> │Server│ +└───────────────────┘ └──────┘ +``` + +GraphSync (both the Consumer and the Server) returns a stream of Blocks. The order of the Blocks is the same as if it were a local traversal. + +The Consumer verifies the Blocks to make sure there are no malicious ones. It might also apply some filters as requested by the Client. + + +### Client + +The Client requests a sub-DAG with a CID and a Selector and receives a stream of Blocks. + +### Consumer + +Some of the filtering requested by the Selector the Client sent might not be possible without additional verification, which needs a bigger set of Blocks. Hence the Consumer might send a modified Selector to the Server, one that returns everything needed to verify the result. + +After the verification and possible additional filtering of the Blocks that were returned by the Server, those Blocks are returned to the Client. + +So far things have been simplified to a single Server. In reality the Consumer will connect to several peers. Not all of them might have all the data needed to fulfill the request locally available. The Consumer deals with all the challenges related to connecting to several peers and possible errors. The Client only receives an error if the Consumer can't resolve it. + + +### Server + +The Selectors the Server understands are a subset of those the Consumer can process. A Server might only contain a subset of the data that is needed to fulfill the request. If that's the case, then an error message is returned which contains the CID of the Block that is missing as well as all Blocks that are needed for further verification.
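To make the two kinds of responses more tangible, here is a minimal sketch of the messages a Server could stream back. The shapes and field names (`type`, `cid`, `block`, `pathToRoot`) are purely illustrative assumptions; this document does not define a wire format.

```
// Illustrative message shapes only, not a defined wire format.

// A successful step of the traversal: one Block.
const blockMessage = {
  type: 'block',
  cid: 'QmExampleCid…',                 // hypothetical CID of the Block
  block: new Uint8Array([/* binary data */])
}

// The Server is missing a Block: it reports the missing CID plus the Blocks
// on the path back to the root, so the Consumer can verify them and resume
// the traversal on another peer.
const notFoundMessage = {
  type: 'notFound',
  cid: 'QmMissingCid…',                 // hypothetical CID of the missing Block
  pathToRoot: [blockMessage]            // Blocks needed for further verification
}
```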
+ + +Peers with subsets only +----------------------- + +This section describes how the Consumer deals with peers that only contain a subset of the data that is needed in order to fulfill the request. It is not about error handling for cases like connectivity issues. + +The following pseudo code describes a possible algorithm: + +``` +const peers = <list of peers> + +const doGraphSync = function* (peers, cid, selector) { + const peer = peers.nextPeer() + const messages = peer.graphSync(cid, selector) + + for (const message of messages) { + if (message.isBlock()) { + yield { + cid: message.cid, + block: message.block + } + } + // Server has only a subset of the requested DAG + else if (message.isNotFound()) { + // Error if none of the peers has the CID + + yield* doGraphSync(peers, message.cid, selector) + } + } +} + +const blocks = doGraphSync(peers, <cid>, <selector>) +for (const block of blocks) { + // Do something with the blocks +} +``` + +A peer gets a request with a certain CID and Selector. As long as that peer contains all the data that is needed to fulfill the request, a stream of Blocks is returned. + +If the peer doesn't have a certain Block, it returns an error and the Consumer will request that Block together with the Selector again from another peer that can hopefully fulfill the request. + +In the error case, it is not enough to return the CID of only the Block that wasn't available. The error also needs to contain more context in order to resume the traversal on another peer. Here's an example to make it clearer: + + +``` + ┏━┳━┱─┐ + ┃█┃█┃ │ + ┗━┻━┹─┘ + ┌───────────────┘ │ └──────────────┐ + ┏━┳━┳━┓ ┏━┳━┳━┱─┐ ┌─┬─┐ + ┃█┃█┃█┃ ┃█┃█┃╳┃ │ │ │ │ + ┗━┻━┻━┛ ┗━┻━┻━┹─┘ └─┴─┘ + ┌──┘ │ └──┐ ┌───┘┌┘ └┐└────┐ ┌─┘ └┐ +┏━┳━┓┏━┳━┓┏━┳━┓ ┏━┳━┓┏━┳━┓┌─┬─┐┌─┬─┐ ┌─┬─┐┌─┬─┐ +┃█┃█┃┃█┃█┃┃█┃█┃ ┃█┃█┃┃█┃█┃│ │ ││ │ │ │ │ ││ │ │ +┗━┻━┛┗━┻━┛┗━┻━┛ ┗━┻━┛┗━┻━┛└─┴─┘└─┴─┘ └─┴─┘└─┴─┘ + +``` + +The filled nodes are the ones that can be found on the requested peer. It will return the nodes up to the one marked as `╳`. If it just returned that CID as the error, the consumer would have no knowledge of its siblings or its parent. Hence the error also needs to contain the full path to the root. The Consumer can then use that context for resuming the traversal correctly on another peer. + +If none of the peers contains a certain Block, an error is returned to the Client. + + +UnixFS v1 as example +-------------------- + +Using UnixFS v1 as an example for a Selector makes things more concrete. + + +### Client/Consumer interface + +These are the fields of the Selector: + + - Byte offset: to seek in a file + - Payload length: to get the file up to a certain byte position, e.g. for buffering + - Max depth: the maximum depth of the traversal (e.g. to get directories only n levels deep) + - Path: get the subtree of a specific path (e.g. a file in a directory) + - Type: e.g. "File" or "Directory" + - Payload nodes: (Boolean) whether to return only nodes containing data or not + + +### Consumer/Server interface + +The Consumer needs to make sure that it doesn't forward malicious nodes to the Client. For the `Byte offset` we need to get the whole subtree from offset 0 to the requested one in order to verify it. Those additional nodes will not be forwarded to the Client. + +Most of the fields from the Selector the Client provided are used. Exceptions are the `Type` and `Payload nodes` fields. Even nodes not conforming to those filters need to be returned by the Server for verification. The Consumer will then apply those filters in order to return only the requested nodes to the Client.
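As an illustration of the fields listed above, a UnixFS v1 Selector passed from the Client to the Consumer could look roughly like this. The concrete field names, values and the `consumer.graphSync()` call are assumptions made for the sake of the example, not a fixed schema or API.

```
// Illustrative only: one possible encoding of the Selector fields above.
const selector = {
  byteOffset: 1024,          // seek in a file
  payloadLength: 65536,      // get the file up to a certain byte position
  maxDepth: Infinity,        // maximum depth of the traversal
  path: '/some/dir/file',    // hypothetical path: get the subtree of a specific path
  type: 'File',              // "File" or "Directory"
  payloadNodes: true         // whether to return only nodes containing data
}

// A Client request could then roughly look like:
// for await (const block of consumer.graphSync(cid, selector)) { … }
```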
+ + +Misc +---- + +### Ideas for Selectors + +If a Selector can operate over several different Multicodec types (UnixFS v1 is always only `dag-pb`), it makes sense to be able to filter on it. The use case is a call to GraphSync where the initiator can't parse all kinds of Blocks, but only certain ones. + + +### Better name + +The name "GraphSync" is catchy, but it doesn't really describe what it is about. Please let us know if you have an idea for a better name. + + + +### Credits/thanks + +A huge thanks to @b5 and @mib-kd743naq for nailing a lot of nitty-gritty details down during the GraphSync Deep-Dive at the [IPFS Developer Meetings 2018] in Berlin. Also thanks to @jbenet and @stebalien for finding an agreement on key things very quickly. + +[IPFS Developer Meetings 2018]: https://github.com/ipfs/developer-meetings + + + +Comments on this PR +------------------- + +This document was originally [PR-66]. It spawned a lively discussion. This discussion is preserved here for completeness. + + +### #66: Proposal: GraphSync (A) (open) +Opened 2018-07-16T12:59:50Z by vmx + +These are the current thoughts about GraphSync written down in a +single document. + +This also contains the results from the Deep-Dive session at the +Developer Meeting 2018 in Berlin. + +This document should be seen as a starting point, not as a complete, ready-to-merge thing. + +/cc @b5 @diasdavid @jbenet @mib-kd743naq @Stebalien + +--- +#### (2018-07-20T23:32:50Z) Stebalien: +I'm having trouble getting a picture of the protocol from this document, even as a starting point. + +I'm seeing: + +``` +User -> GraphSync <-> GraphSync <-> Something? Server? Per selector? +^ | +| v +"Consumer" for selector type Y +``` + +Where the "Consumer" for selector type Y "executes" selectors of type Y, puppeting the GraphSync "client". Is that correct? If so, I'd like to be careful to avoid putting *too* much logic in the "Consumer" as we don't want implementing new selectors to be hard. + +--- +#### (2018-07-23T12:13:16Z) vmx: +I think you are correct. The point of the Consumer is that the GraphSync part on the Server can be pretty minimal. Implementing new selectors would then mostly happen in the Consumer as the Server already has the basic building blocks implemented. + +--- +#### (2018-08-03T06:50:57Z) vmx: +/cc @ajbouh + +--- +#### (2018-09-19T18:49:22Z) whyrusleeping: +@vmx do you have plans around wire format changes? + +Also, any thoughts towards real world performance of such algorithms? A lot goes into making bitswap both fast, and not wasteful. The duplicate blocks issue is pretty significant, and worth designing solutions that take it into account. For example, in the happy case, we can ask one person for the data, they can tell us what they don't have, and we can then ask others for that data. But that relies on us trusting that the other peer will be honest, and fast. + +--- +#### (2018-09-20T09:42:34Z) vmx: +@whyrusleeping Currently GraphSync is becoming more of an RPC call thing, not a real Bitswap replacement. Perhaps GraphSync could then be used as a building block. While implementing what *I think* GraphSync is, I get more and more doubts that it is useful. + +--- +#### (2018-09-20T18:26:20Z) whyrusleeping: +@vmx don't get me wrong, I think GraphSync (in some form) will be incredibly useful.
The hard part is just figuring out what that looks like. + +I've been grappling with the latency vs bandwidth waste vs centralization tradeoffs lately, and it's tough. Some tools that I'm thinking might be useful: + +- A 'would you send me this?' flag in wantlist entries that tells the other side only to send back an indication of whether they can provide the content. Alternatively, we could just send single 'findProvider' rpcs to each of the peers (not do a full DHT crawl). +- (as you also suggest) A 'Nack' response. Where if you ask a peer for some content, and they don't have it, they return a 'Nack' or 'ErrNotFound'. This should be optional, and specified in the wantlist, but it saves us from having to send out cancels, and also allows us to more effectively distribute and schedule requests across our peers. The downside is that a mischievous node may just refuse to send the Nack, messing up our accounting. +- Provider hints. Our bitswap peers could send us information about other peers that might be beneficial for us to send requests to. + +--- +#### (2018-09-20T19:52:48Z) b5: +Jumping in again to wave hands about _graph manifests_. I brought this up in cursory notion at this session, but have had some time to marinate, and think it's a concept worth revisiting. + +For every discrete DAG _g_ one can construct a _manifest_ which is a second DAG of only block names and links (no content): + +![graph_manifest](2018.07-graphsync-a/MTE1NDM5MC80NTg0MTc5OS1lMTFjZGQ4MC1iY2U4LTExZTgtOWZkOS01NzI4NDRhYWJmNTkucG5n.png) + +These manifests are relatively small. If expressed as a set of two lists (one of array-positional links and one of names/hashes) it should be possible to represent many gigs worth of IPFS DAG content in < 100kb of CBOR. + +IMHO, the power of IPFS is derived from the dual expression of blocks as both graphs and flat lists. This is also a fault line that shows up in the seam between bitswap and graph sync. I think graph manifests are a missing "primitive" from IPFS. + +These manifests have a few properties that are nice: +* deterministic: a properly designed algo for generating manifests will generate the same manifest when given the same graph. Hash it, pass it to your friends. If the graph you're generating a manifest for is immutable, manifest and hash of manifest are also immutable +* one can generate a manifest of any subgraph +* if implemented as a protocol, manifest generation can act as a challenge. You don't know if I've already generated this manifest ahead of time, and am simply asking you to compute it for trust purposes +* When I _don't_ have a manifest, I can ask multiple peers for the manifest of the same graph root. Differing responses raise suspicion +* _sooooooo cachable_, manifests could themselves be designed/required to fit into a single IPFS block. + +If I'm planning on efficiently planning my requests for blocks, I _really_ want this manifest as soon as possible. Once I have a manifest I can trust I know a shit tonne of important things: +* when I'm done (progress) +* what blocks to ask for +* how blocks are related + +So this might be a graph-sync thing, but it could also be a structural outgrowth of a bitswap session: establish a trusted graph, then divvy up block requests among the session. If block sizes are also in the manifest, one can match larger blocks to faster peers.
The point being, a manifest gives me a primitive to plan my block requests, and makes optimizing request planning a matter of better matching + +Downsides: +* you need the entire graph to calculate a manifest, or at least a trusted list of names and links (you may be able to use manifests to generate other manifests... a story for another day) +* manifests aren't super trivial to calculate, I could trick others into doing work they don't want to do if not rate limited or something. It's worth noting that calculating a manifest should be as cheap as or cheaper than a walk through the block graph (cheaper if I can avoid loading associated data). + +Both of those downsides can be mitigated by implementing manifests as a protocol, where peers can dynamically generate manifests of arbitrary graphs & subgraphs, which is the only reason I think it should exist at the IPFS layer. + +Adding in Graph manifests is kinda like turning IPFS into dynamic bittorrent 🤷‍♂️. + +--- +#### (2018-09-21T09:56:56Z) vmx: +I wrote this yesterday, before the two new comments from @whyrusleeping and @b5. I just keep it like that and post a follow-up comment on how this all relates to each other. + +Definitions +----------- +- **Client**: The peer that sends out a request. +- **Server**: The peer that receives a request from a Client and responds to it. +- **Node**: An item within the DAG + + +Intro +----- + +I finally took the time to [code](https://github.com/vmx/js-transsub) what I had in mind (based on this PR). After tackling a "give the full sub-DAG", I wanted to tackle an obvious candidate for GraphSync: UnixFS v1. + +I then got deep into a rabbit hole. I thought I'd just execute the UnixFS Engine code on the Server, so I don't have to re-implement that. It would then return all the Nodes it's visiting, which would then be the ones that are needed in order to perform the same query on the Client. + +It turned out that such an RPC-like call isn't really useful. It won't serve the purpose of being something that is a better Bitswap. If you had a subset of that Graph already, you'd still get a lot of Nodes you don't actually need. I came to that realisation after reading [@whyrusleeping's comment](https://github.com/ipld/specs/pull/66#issuecomment-422916992) (thanks!). + +I then thought I needed to go back to the drawing board and talk to lots of people with more knowledge, as I had really hit a wall and needed to start from scratch. + + +A better way +------------ + +Suddenly I had my own ideas and after a bit of thinking, I think I found a way to move forward which aligns with the stuff I already have. Make GraphSync less powerful than I intended and let the application layer deal with it. GraphSync will only support getting a full sub-DAG combined with a maximum depth. So if you want to get a single Node, you just have a maximum depth of 1. + +Let me use UnixFS v1 as an example of how this is still powerful enough. + +### Getting a full file + +The easiest case is if you request the full contents of a file. It's just the full sub-DAG of a specific path without any depth limitation. + +### Getting only the first few bytes of a file + +You wouldn't want to transfer all Nodes of the file as only a small part is needed. For such a traversal you would need to keep track of the sizes of the Nodes that were already transmitted. That's a lot of logic and out of scope for GraphSync. + +Instead UnixFS needs a bit more logic. It could fall back to how things currently work with Bitswap and request one block after another. Or it could be smarter and e.g. request all children of a certain Node. This would be a request with a maximum depth of 2. It could then inspect those nodes and do subsequent requests, e.g. for full sub-DAGs from some Nodes without a maximum depth limitation.
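A rough sketch of that smarter strategy, in the same pseudo-code style as the algorithm earlier in this document. `peer.graphSync()` is borrowed from that pseudo code; the `{ maxDepth }` selector shape, `linksOfRoot()` and `link.fileSize` are assumptions used only for illustration, not an existing API.

```
// Hypothetical sketch: fetch the beginning of a UnixFS v1 file by requesting
// the root plus its direct children first, then fetching full sub-DAGs only
// while more bytes are needed.
const fetchFileStart = function* (peer, fileCid, wantedBytes) {
  // Root Node plus its direct children (a request with a maximum depth of 2)
  const rootAndChildren = Array.from(peer.graphSync(fileCid, { maxDepth: 2 }))
  yield* rootAndChildren

  let bytesSoFar = 0
  // linksOfRoot() stands in for the UnixFS logic that reads the root's links
  // and the file size behind each link; it is not part of GraphSync.
  for (const link of linksOfRoot(rootAndChildren)) {
    if (bytesSoFar >= wantedBytes) break
    // Full sub-DAG of this child, without a maximum depth limitation
    yield* peer.graphSync(link.cid, { maxDepth: Infinity })
    bytesSoFar += link.fileSize
  }
}
```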
+ +### Getting a slice of a file + +This case is about getting only a few bytes combined with a certain offset. It works similarly to the case above, which is the one without the offset. + +### Getting another slice of the same file + +So far the cases would've worked just as well with the way described in the intro, doing a UnixFS traversal on the Server and transmitting all visited Nodes. + +But this case is more interesting. If you want a slice of a file you previously got another slice from, it could be that you already have some of the Nodes stored locally. It would be a waste to request all those again from the Server. + +The current system handles traversals where some Nodes are missing well: thanks to Bitswap it will get those missing Nodes from the network. GraphSync can't be used in such a transparent way as more context is needed (you could use GraphSync like Bitswap by always requesting with a maximum depth of 1, but that wouldn't improve anything). The traversal would signal that the requested Node is not available locally and then you can decide what to do. It could be that you request the full sub-DAG, or perhaps only the direct children. What is best suited depends on the current context and the traversal that is going on. + +If such a signal for a missing Node is provided by the traversal, it can be re-used for partial GraphSync replies. If you request a full sub-DAG it could well be that the Server has only a subset of the data. The logic already in place could then deal with such conditions. + + +Outro +----- + +There are still a lot of open questions around how to process those incoming Nodes from a GraphSync request, but at the moment I think those are just implementation details that can be solved. + +--- +#### (2018-09-21T10:17:48Z) vmx: +@whyrusleeping I fully agree that the hard part is what GraphSync should look like. That's exactly what I struggle with. + +My "better way" addresses the "NACK response" part. It could be extended to a "do you have the data?" request, although I guess if a peer has the data, we would want it anyway, so having a "NACK response" would be enough. + +Or a "would you send me this?" could also be combined with @b5's Graph Manifests and would not only reply with information about a single block, but with the whole sub-DAG this block links to. + +Provider Hints could be the Graph Manifests. + +@b5 Thanks for the detailed information on the Graph Manifests. I can see how those could help to optimise the things I described in my "better way". + +--- +#### (2018-09-24T18:00:33Z) mikeal: +> Graph Manifest + +Something related that I've been thinking about is creating an abstraction above a Block Store that stores metadata about whether or not the store contains the entire graph linked to in the block. + +This need came up in a proof-of-concept I wrote for "pushing" a graph called [graph-push](https://github.com/mikeal/graph-push). Essentially, it exposed both a "shallow" and "deep" push based on whether or not the service has a block. Pushing this decision to the client was highly problematic, it means the client would have to choose between being either fast/efficient or reliable. + +* Block Store: Unsorted key/value store. +* Stores block data indexed by multihash. +* Graph Store: Boolean CID index on top of Block Store. +* `true`/`false` value for CIDs. `true` if underlying block store contains **all** the blocks referenced in the CID's graph.
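As a small illustration of that layering (the names and API below are made up for this sketch, they are not an existing interface), a Graph Store could be little more than a boolean index maintained next to a block store:

```
// Illustrative sketch only: a boolean "is the whole graph local?" index
// layered on top of a plain block store. `blockStore.get()` and `linksOf()`
// are hypothetical helpers.
class GraphStore {
  constructor (blockStore) {
    this.blockStore = blockStore
    this.complete = new Map() // CID (as a string) -> true/false
  }

  // Recompute whether every block reachable from `cid` is in the block store.
  async index (cid) {
    if (this.complete.has(cid)) return this.complete.get(cid)
    const block = await this.blockStore.get(cid)
    let complete = block !== undefined
    if (complete) {
      for (const child of linksOf(block)) {
        complete = complete && await this.index(child)
      }
    }
    this.complete.set(cid, complete)
    return complete
  }
}
```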
+ +The reason I bring this up is, I don't see how a singular manifest scales well for very large graphs. It means that you either keep a static representation of the graph index for every CID or you do a fairly expensive query over a simpler index every time you generate the manifest. The manifest could also be incredibly large which leads me to think about all kinds of performance concerns. You can imagine solving these issues with depth definitions and options but this starts to get very complicated very fast and is always going to have cases that make any solution more or less optimized (deep vs. shallow graphs for instance). + +It may be more flexible to simply be able to say "I contain **all** the blocks in the graph for this CID" or "I don't know how much of this graph I have." The client should be able to figure out the best way to prioritize getting the graph based on this information. It can traverse down the graph with a peer that has *some* of the data until it hits a block that peer doesn't have. As it makes its way down the graph and has to find new peers in a very large graph it will see more peers that have the entire graph and can prioritize those peers. + +--- +#### (2018-09-25T18:33:50Z) b5: +> The reason I bring this up is, I don't see how a singular manifest scales well for very large graphs. + +That's a really good question IMHO: how much could a graph manifest practically hold? If it's not enough info, then it's a bad design choice. Given that @vmx's _better way_ might be able to make use of these manifests, I've coded up a quick spike implementation to get a feel & see if this is worth discussing further: + +### Example Code + +https://github.com/qri-io/go-ipld-manifest + +There's a test in there that runs some _extremely_ rough numbers of a 4-tiered DAG, where the first three tiers are small "link-only nodes" and the bottom ~3k nodes are all 256kb blocks. Running that test with `go test -v`: + +``` +manifest representing 4043 nodes and 1.024210Gb of content is 253.921997kb as CBOR +``` + +So based on this *very rough* example, you could get around 1 Gig of content represented in a single manifest if stored as CBOR. I'm assuming a manifest should fit in a single block for caching purposes, but that may not necessarily be true. To keep the example "real" (lol) I've added in a list of block sizes to the manifest. Whether that's acceptable is, well, a question for y'all. It's worth noting this total-storable figure will drop with the switch to base32 cids. + +> Pushing this decision to the client was highly problematic, it means the client would have to choose between being either fast/efficient or reliable. + +I'm assuming we're operating in a peer-2-peer environment, and having trouble seeing how me (as a peer) having a list of all the blocks I need before I go get them _isn't_ worth the trouble. I'm guessing there's details & a good war story here that I'm having trouble getting to b/c of the client / server terminology. As far as I understand, we're trying to figure out a _protocol and implementation to retrieve a subgraph of a DAG with providing a CID plus some meta information_, which clearly has a connection to bitswap, the question is where to draw lines between those APIs, and what API GraphSync should expose (which I fully trust @vmx will handle ;) ).
I don't think graph manifests solve this problem. I'm proposing manifests are a missing building block in that process, and that there are other use cases for a graph manifest outside of graph sync (the big one being a proper progress indicator). + +> It means that you either keep a static representation of the graph index for every CID or you do a fairly expensive query over a simpler index every time you generate the manifest. + +There's a third option: only keeping manifests of important CIDs. In the common use cases that means root hashes. No need to keep a manifest of every CID, but being able to generate a manifest of any graph is a useful property. Manifests of immutable content are also immutable, so caching here is a win, but not vital. Being able to generate manifests at the protocol level would alleviate the need for users to see this stuff, and open the door to future work with subgraph manifests. + +The code example provided isn't usable as a measurement of performance b/c it's not doing any real node resolving. If network is involved, yes this will be a _very_ expensive operation that should be avoided entirely IMHO. (@mikeal here I think we're in agreement that a peer either having the full graph or not is a vital piece of info for decision making). + +If the peer has the full graph locally, calculating a manifest should be cheap. How cheap depends on plumbing I'm not super familiar with. Performance could indeed be a reason for not using the concept of a manifest at all, but to me if we can't generate a fast manifest of a complete graph we have locally, something is wrong. + +> It may be more flexible to simply be able to say "I contain all the blocks in the graph for this CID" or "I don't know how much of this graph I have." The client should be able to figure out the best way to prioritize getting the graph based on this information. It can traverse down the graph with a peer that has some of the data until it hits a block that peer doesn't have. As it makes its way down the graph and has to find new peers in a very large graph it will see more peers that have the entire graph and can prioritize those peers. + +I have two concerns here: +* This conversation is happening over the network. Network is expensive. +* The logic that drives this is, IMHO, really hard when you put multiple peers speaking concurrently into the mix. + +To me the goal of a graph manifest is to get the client/requesting peer out of an information deficit as early as possible in the graph-sync process, allowing the requester to perform coordination duties, and to be able to concoct different strategies for delegating requests to peers in parallel. To me those "coordination duties" are where the graph sync work starts. If others can benefit from having manifests (I know we would), then I think it's a candidate for pushing lower into the stack. + +> Graph Store: Boolean CID index on top of Block Store. + +To me this is, like, super solid, which I interpret as part of the "just store your graph information in a graph database" school of thought. This has been suggested elsewhere (I think @lgierth is one of its proponents). A graph database / index does sound smarter than one-off manifests, but I think even in that context they can work in tandem: generate a manifest from the graph DB so the requester can update its knowledge of the merkle forest. Sounds like a lot of planning work that's above my pay grade ;).
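To give a concrete picture of the "two lists" representation mentioned earlier in this thread, a manifest for a tiny DAG might look roughly like this. The field names and CIDs are illustrative placeholders, not the actual go-ipld-manifest layout.

```
// Illustrative only: a manifest as flat lists, not the real go-ipld-manifest format.
const manifest = {
  // nodes[0] is the root; all CIDs are hypothetical placeholders
  nodes: ['QmRoot…', 'QmBranch…', 'QmLeafA…', 'QmLeafB…'],
  // links[i] holds the positions (indices into `nodes`) that node i links to
  links: [[1], [2, 3], [], []],
  // optional block sizes in bytes, so a fetcher can plan requests
  sizes: [130, 245, 262144, 262144]
}

// With a trusted manifest, a peer can compute what is still missing locally:
// const missing = manifest.nodes.filter(cid => !blockStore.has(cid))
```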
+ +--- +#### (2018-09-25T18:46:37Z) whyrusleeping: +@b5 for standard 'wide' graphs, what is the advantage of the graph manifest over simply doing a breadth first search over the dag? + +--- +#### (2018-09-25T18:48:52Z) ajbouh: +Looks like awesome work! + +Datasets like ImageNet have 10^6 entries (image files) in a single +directory. IPFS really falls down when trying to handle scenarios like +this. In the abstract, a manifest sounds like a good solution. Though it +certainly won't fit in a single block! + +--- +#### (2018-09-25T18:53:44Z) b5: +> what is the advantage of the graph manifest over simply doing a breadth first search over the dag? + +locally or over the network? Locally the advantage is very little if any. To me the advantage shows up over the network, giving a requesting peer a small payload of trustable knowledge of what they're after. I think they'd make a great extension when kicking off a bitswap session. For any DAG with less than some threshold of blocks, a manifest would be overkill, and should be skipped.
+ +--- +#### (2018-09-25T18:57:46Z) whyrusleeping: +@b5 i'm talking about over the network. Say i'm fetching a really large file, If i use a selector to fetch the first three layers of the graph, it should give quite a few hashes to request further, in a trustable way, without being too much data. + +--- +#### (2018-09-25T18:59:06Z) whyrusleeping: +Also potentially relevant for some, an issue I wrote up on selectors a while back: https://github.com/ipfs/notes/issues/272#issue-271301069 + +--- +#### (2018-09-25T19:22:16Z) b5: +@whyrusleeping using the first example from your selector thoughts: +``` +{/a/b/c/d} + +Returns the object referenced by d (single object) at the path /a/b/c below H, as well as the merkle proof to H. +``` +One approach would be to optionally return a manifest of H, or at least the hash-of-manifest-of H if the peer has a manifest on offer. Peer could elect to not compute a manifest for a number of reasons, so it should be optional. In this context, the manifest is the "quite a few hashes to request further" without being too much data. It's "trustable" in the DHT sense, where manifests should probs be vetted against multiple responding peers or something. + +If you _do_ end up with a trustable manifest, you can now construct selector-like queries locally & just ask for blocks, because you have the entire graph, just not the content. You don't know which peer has which blocks, but that's less relevant than knowing what blocks you need. Recursive fetching strategies that hone in on outstanding blocks become a thing, which should cut down on complex selector construction & fulfillment, and parallelize across peers better. + +--- +#### (2018-09-25T20:38:31Z) mikeal: +> Datasets like ImageNet have 10^6 entries (image files) in a single directory. + +> manifest representing 4043 nodes and 1.024210Gb of content is 253.921997kb as CBOR +> where the first three tiers are small "link-only nodes" and the bottom ~3k nodes are all 256kb blocks + +Any solution we go with here is going to be more optimized for one case vs another. That said, I don't think that we should be using block sizes as optimal as 250K as our go-to use case. Optimal file chunking for large binary files like media would be based on keyframe windows and with text files we probably want to use a rabin chunker for better updates, which will result in many blocks of a much smaller size. + +I think that we need a better idea of what use cases we're trying to optimize for. I can't think of a use case for large structured data where a manifest is not prohibitively expensive. As a general rule, the more structured data is the larger the indexes are, and a manifest is effectively an index. + +> This conversation is happening over the network. Network is expensive. + +Couple notes here. Whether or not a peer has the full graph is a single bit, we could just stick it in the DHT and let the client use it when prioritizing peer selection. + +Being that network is expensive, I don't see why we'd want clients to pull down the entire manifest when they may only want a portion of the graph. + +--- +#### (2018-09-25T20:45:53Z) ajbouh: +You make good points about needing to outline the use cases we're targeting. Let me ask some silly questions: + +Without a manifest of some kind, how will someone know what entries they want? + +Are we assuming that IPFS should always rely on out of band coordination for distribution of CIDs? + +This out of band bit seems like the implicit assumption in most of IPFS's design. 
I believe it is a source of many surprising (and disappointing) performance characteristics. + +--- +#### (2018-09-25T21:32:12Z) mikeal: +Reading through this again and I'm starting to see some big holes in this approach. + +1. How does the client parallelize grabbing the graph from multiple peers? If it happens to start requesting a graph from a slow peer it has no way to start grabbing other parts from other peers. + +2. When using selectors, what guarantee do we have that the peer sent us the correct blocks? +* If I ask a peer for `CID-1` + `/one/two/three` and it starts returning blocks starting at `three` it could literally return me anything it wanted to and I'd have no way to know it was wrong. **UPDATED:** I just saw `Returns the object referenced by d (single object) at the path /a/b/c below H, as well as the merkle proof to H.` in the path selector spec which should resolve this particular point. +* When requesting sub paths, the peer MUST return the intermediary blocks first so that we can parse them to verify the next blocks are correct, otherwise this is an open flooding attack on the block store. +3. With this approach, the "server" has no way of knowing which blocks the client already has. So in actual "sync" cases we're going to be requesting a ton of blocks we already have. + +I don't quite see how we're going to securely and efficiently put this much logic on the "server" side of the transaction. It's a nice idea in theory to just have one end of the connection start sending blocks without the need for another request but this opts us out of any opportunity to **not** send blocks one side already has and the client can't really be responsible for parallelizing across multiple peers if it isn't responsible for the traversal of the graph. + +Similarly, I don't see how a client could make use of a manifest. There's no guarantee that the peer isn't lying about the manifest, although you could detect inconsistencies as you parse the blocks and go from there. Other peers could make use of a *client's* manifest when sending blocks back, but this still isn't sufficient because the client's block store can contain several trees and it could have a sub-tree but be missing the link between that sub-tree and the root of this particular tree, so it wouldn't have appeared in the manifest for that root but was probably in another. This is going to happen a lot in static site deployments, people have lots of similar shared assets across sites and there are changes to those assets in subtrees all the time. + +--- +#### (2018-09-25T21:36:55Z) mikeal: +One more thing, can we assume fully duplexed connections are available? + +If so, there are ways that we can optimize performance by concurrently asking for blocks rather than trying to come up with ways for one end to send many requested blocks serially. + +--- +#### (2018-09-26T00:54:01Z) b5: +Ok, might be worth backing up to make this a little clearer with a story. First, the selector conversation is separate from graph manifests. For the sake of argument, let's put selectors aside for a second and walk through an example of how this _might_ work. + +First, I add some content to IPFS, which generates the classic DAG and CID `A`. The content I've added totals 15MB, which is larger than the threshold for creating a manifest (for example, 10MB), so I create one right here & now, before anyone asks for it. The manifest for a 15MB file clocks in at ~4.4kb. 
For the sake of using the file system I already have, I add this 4.4Kb manifest back to my local IPFS repo and get the CID `Amanifest`. + +Later on peer Sandra comes along and asks me for the content at CID `A`. I respond with the first block `A`, and because I'm nice , I _also_ populate a field in that message `manifest` with `Amanifest`. + +Sandra's been asking a few others for hash of `A`, and isn't getting back and messages with the `manifest` field populated so she can't really trust me. She downloads all of the blocks at CID `A` the old fashioned way, but because Sandra is nice, once she has the full, verified DAG, she calculates the manifest of `A`, gets CID `Amanifest` as the result, and knows I'm a bro. If she'd gotten any answer other than `Amanifest` she would have put me in the burn book. + +Em then connects and asks _both_ me and Sandra for CID A. This time we both populate the _manifest_ field with `Amanifest`, and because this trivial network is set to trust content seen by two peers, Em asks Sandra for the content at `Amanifest`, and Sandra sends over a 4.5Kb manifest, which Em decodes & runs against the checksum. The manifest passes the integrity check, and Em's satisfied with 2 peers saying the same thing, so Em uses the manifest. + +At this point Em has a _complete_, list of _every_ block in CID `A`, the graph, and the size of each block. Em uses this info to do smart things. + +Before Em does anything else em does a set intersection between their local blocks and the blocks listed in the manifest. Turns out Em already has 15 of the 70 blocks listed in the manifest, so they can skip asking for those. + +Em wants the whole DAG, so they do the easy thing & just cut the remaining list of 55 blocks in half, asking me for one half and Sandra for the other. Sandra's quicker than me and finishes her list first, so Em cuts my remaining list in half and gives the other list of blocks to Sandra again to fulfill, letting my weak-sauce tethered 3G connection close out the 4 blocks I can contribute. + +While this is happening Em is seeing a progress bar, because they know exactly how many blocks are left, which they have, and which they need. One day in future versions of IPFS Em might use that information to construct fancy selectors that carve up the manifest, asking for a subgraph of available content. If the manifest came back with, say a larger size than Em's allowed repo, Em may elect to abort the process entirely before asking for more blocks. + +While blocks are transmitting Em is doing the usual checking of the blocks coming over the wire. If at any point the blocks Em's requesting aren't adding up to correct hashes, the whole process can be aborted. In this example em's local 15 blocks happen to be a subgraph that adds up to a file `index.html`, which they already have from another DAG. Em could run quick integrity check on this subgraph, and if it works out, this manifest is even more trustworthy. + +Peers are incentivized to not lie about manifests because If a peer _ever_ transmits a malicious manifest and you acquire the real manifest, you know they're misbehaving, because there's a deterministic algorithm connecting the content to the manifest. Because you can generate the manifest locally once you have the full DAG, you can check for malicious responses after the fact. + +Ideally, all of this is pretty low level, and structured as an opt-in speed-up-happy-path, falling back to the way things work today (because it works!). 
+ +Finally, it's worth pointing out this approach is chunking-strategy agnostic. Graph manifests will work on any DAG. + +To me, selectors enter the conversation _after_ manifests. Manifests by _no means_ answer all the questions you would want to ask of a DAG, but a manifest makes constructing those selector queries simpler and faster. As @vmx mentioned, something akin to manifests would be something graph sync builds upon. + +I think @ajbouh hit the nail on the head with this: + +> Without a manifest of some kind, how will someone know what entries they want? + +I'd be happy to outline how I plan to use graph manifests out in IPFS userland, but would rather avoid clogging all y'all's inboxes if we don't have clarity on the concept 😄. + +--- +#### (2018-09-26T04:04:21Z) whyrusleeping: +@b5 Hrm... I'm still not seeing how much the manifest improves on the situation. For the 15MB file example, you end up with a 1 deep graph, where the root node has links to all the leaf nodes. So the root 'A' of that file contains all the information that the manifest would. + +Then, at some point the graph gets too big for the manifest file to be represented as a single object, so you would have to shard it. This runs into the same issue as before... + +If I could have a selector that said "Give me all non-leaf nodes in graph A" it would not be too much more data than the proposed manifest, and actually contain data that we need for the graph. + +--- +#### (2018-09-27T09:44:53Z) mib-kd743naq: +@b5 looks interesting, though I can't dig into it in-depth. Could you try to build a manifest over [one of my datasets](https://ipfs.io/ipfs/zdjA8qkDL6PaMtsVc4mVC32dywWagCynuK3JBVDo6EuuKAGDV), see how that behaves? ( yes, I still need to clean up the go-ipfs patch to render the metadata locked in this set, $real-world is really messing with my available time ) + +--- +#### (2018-10-22T06:39:55Z) daviddias: +Good to share here a video that just got uploaded, [Volker's talk on GraphSync from LabDay](https://youtu.be/tpqXUmokFZ0). + +--- +#### (2018-10-29T11:23:03Z) daviddias: +@jbenet and @whyrusleeping produced a specification for GraphSync and IPLD Selectors during the Go IPFS Hack Week. It contains all the thinking for these two systems from the last 3 years + thinking about this (first record was [Jeromy's Bitswap Talk, circa Dec 2015](https://youtu.be/9UjqJTCg_h4?t=1478)). + +You can watch Juan's presentation on the [GraphSync and IPLD Selectors Spec here](https://drive.google.com/open?id=1NbbVxZQFKXwW6mdodxgTaftsI8eID-c1) + +--- +#### (2018-10-29T11:29:38Z) daviddias: +@jbenet can you provide the docs produced ASAP? I believe that @vmx and @mikeal are still working on the direction that came out of their recent discussions vs leveraging the spec you produced. + +@vmx @mikeal one of the valuable outputs of the discussions in Glasgow, is that independently of who is right when it comes to GraphSync design, any GraphSync design and implementation will have to go through a series of tests/benchmarks with multiple graph topologies. Can you list those here? AFAIK we at least have: + +- Long linked lists (e.g. Blockchains, CRDT logs, etc) +- Gigantic files +- Very large sharded directories +- Hybrids between sharded directories and gigantic files (e.g. npm, wikipedia, etc) +- Fast Video Streaming + +@hannahhoward I believe you are working on benchmarks for a potential GraphSync for go-ipfs, do you have a list of topologies you are about to test for?
+ +--- +#### (2018-10-29T13:10:03Z) warpfork: +That latest set of docs for IPLD Selectors should also be linked to on https://github.com/ipfs/notes/issues/272 :) + +--- +#### (2018-10-30T23:06:41Z) mikeal: +> That latest set of docs for IPLD Selectors should also be linked to on ipfs/notes#272 :) + +Looking at Juan's screen in his talk and nothing in or linked to on this page matches what is up on his screen :( + +--- +#### (2018-10-30T23:11:08Z) mikeal: +> That latest set of docs for IPLD Selectors should also be linked to on ipfs/notes#272 :) + +This is a good starter list. Once we have the benchmarks somewhere we can always add data sets, I'd rather just get a few of these going and iterate than try to front-load a ton of work when we're currently operating with zero benchmarks. + +The much harder part of this will be multiplying the data sets with peer/network conditions. For each of these data sets we need to benchmark situations in which: + +* Only one peer has all the data. +* Two or more peers have all of the data but variable network conditions. +* Two or more peers have parts of the data but none have the whole set. + +The issue with the old design wasn't so much that it didn't work well under a specific data-set but that it completely broke down once you were getting the set from multiple peers. + +--- +#### (2018-10-30T23:41:45Z) whyrusleeping: +@mikeal you might be interested in the tests i wrote in go-bitswap recently: https://github.com/ipfs/go-bitswap/pull/8/files + +--- +#### (2018-10-31T01:35:59Z) jbenet: +Hey folks, sorry for delay. I’ll put the docs we made in Glasgow up in the next day + +--- +#### (2018-11-01T20:48:43Z) jbenet: +Here's the selectors part: +- https://github.com/ipld/specs/pull/74 +- https://github.com/ipld/specs/blob/920f671fe388cc401caf32234d2de98eed0cb9b7/selectors/selectors.md + +--- +#### (2018-11-01T21:10:47Z) jbenet: +- I renamed this PR as "GraphSync (A)". +- I PRed up my doc into https://github.com/ipld/specs/pull/75 ("Graphsync (B)") +- See it here: https://github.com/ipld/specs/blob/bd841ab2b974f01eee07ed44e31cacdc56e13540/graphsync/graphsync.md +- See the video presentation here: https://drive.google.com/file/d/1NbbVxZQFKXwW6mdodxgTaftsI8eID-c1/view + +--- +#### (2018-11-01T21:39:41Z) jbenet: +Other notes about the manifests approach discussed here: +- this is related to another important problem when working with large graphs & selectors: being able to check membership of an object in a graph or selection of a graph quickly (locally and in a trusted setting). Especially relevant to membership in a union of hundreds/thousands of selectors (eg the pinning or GC use case) +- A good, fast, efficient implementation of a local selector based pinner would need some way of traversing the paths and structure of the graph, without having to pull out the data. +- I think some graph dbs (probably non-linked data ones) make this kind of "traversal of the links" fast, and also avoid pulling out the data in a node, but not sure. +- I think @whyrusleeping is right that for most graphs, the structure (links and paths) will be about as big as "the whole graph minus leaf/terminal nodes"). So the entire approach may not be winning much. It would be good to test/benchmark this assumption with many kinds of real workloads. + +--- + +And thoughts on provable versions of these. **(not relevant for the short term -- <1yr)** +- Transparent proofs that the structure/manifest of a graph corresponds to a graph would be useful. 
(basically, a way to prove `structure_of_a = structure(graph_a)` where `structure(.)` pulls out the paths/links and nothing else). +- today we use simple merkle proofs because they are the cheapest way in combined "computation + bandwidth" costs. but these can be expensive in bandwidth, so in extremely bandwidth constrained settings (eg filecoin/other blockchains), we use SNARKs which are massively intensive on computation, but drastically reduce bandwidth usage. +- I think proofs of this sort would be too exorbitantly expensive for anything outside of a blockchain (merkle membership proofs + all the path checking insanity. not reasonable in either SNARKs, STARKs, or bulletproofs). +- BUT. since authenticated data structure checking is so tremendously useful, this may hit hardware in not a long time. It is not insane to think we might see chips dedicated to doing proofs like these in hardware in 10 years, especially if enough value is on the line (eg bitcoin asics, SGX, TEEs, light blockchain clients, etc). crazier stuff is happening in hardware for less value on the line. +- following that line of thinking -- in the medium term (1-2yrs) an RFP for "figure out hardware friendly ways to prove membership in a markle dag w/ string paths in the links" might yield results that could be used for the long term (5-10yrs). + +--- +#### (2018-11-01T22:21:48Z) b5: +Thanks for the info. This is super helpful! The whole reason for making a stink comes from pain points we've uncovered building user experiences on top of IPFS: + +**I want to show our users meaningful progress bars when fetching a DAG.** + +That's it. It's a small point, but an extremely crucial one. Unless I'm missing something, IPFS peers lack the info needed to show how many blocks _remain_, and that they're arriving in _parallel_. Not being able to show "bittorrent style" progress bars means we can't build UI that shows users one of the greatest upsides of block-based content addressing: when performing a fetch, there's a chance your node already has some/many of the blocks you need. If you happen to be building, say, a version control system, there's a _very_ high chance you have lots of the necessary blocks already. Nothing else I've seen has this property. It's the detail that made me pick IPFS over dat, and I really want to show it off to the world in a way I think they'll immediately understand. + +It's absolutely true that most (all?) manifests would be pretty close to the size of "the whole graph minus leaf nodes". The entire manifest is a tax. The advantage of a manifest is not in the size, but in getting a fetching peer out of an information-poor context as soon as possible. The tax should be covered by being able to make smarter choices with that knowledge. + +Anyway, I'm just after progress bars. Building this sort of thing in userland is, well, tough. + +-- -- +As for Provable versions of manifests, that's well above my pay grade I'll happily leave that to y'all 😉. + +--- +#### (2018-11-01T22:32:30Z) whyrusleeping: +@b5 I think we can solve the progress bars problem (especially in your ipld usecase) by adding a small amount of extra metadata in each node that lets us know roughly how many nodes are behind each link. You should actually be able to do this today by simply adding that to your existing datastructures. + +Does that seem reasonable? (also, we should open a new issue for 'progress bars on ipld' or something) + +--- +#### (2018-11-01T22:35:55Z) b5: +Totally. 
Apologies all (particularly @vmx), I've hijacked this thread for long enough. I'd be happy to kill the manifest discussion and move the progress-bar chat ~someone~ somewhere else. + +Thanks all! + +--- +#### (2018-11-03T10:16:33Z) daviddias: +Here is a playground for you -- https://github.com/ipfs/interop/pull/44#issuecomment-435576034. Customizable exchange files tests between JS and Go (go<->go, go<->js, js<->js) that test for large files (as large as you want) and directories (as nested as you want). It is pretty easy to try it out with different bundles of go-ipfs and js-ipfs, check the Readme https://github.com/ipfs/interop#run-the-tests + + +[PR-66]: https://github.com/ipld/specs/pull/66 diff --git a/design/history/exploration-reports/2018.07-graphsync-a/MTE1NDM5MC80NTg0MTc5OS1lMTFjZGQ4MC1iY2U4LTExZTgtOWZkOS01NzI4NDRhYWJmNTkucG5n.png b/design/history/exploration-reports/2018.07-graphsync-a/MTE1NDM5MC80NTg0MTc5OS1lMTFjZGQ4MC1iY2U4LTExZTgtOWZkOS01NzI4NDRhYWJmNTkucG5n.png new file mode 100644 index 0000000000000000000000000000000000000000..2fa3552e4f91238a32692dfbb2893ff87e5a71b7 GIT binary patch literal 29119
zzDG2H#&K`c*%Sxl%D+tncp7{*=dhhs(B|=ICW`P-Fex)YG3qIzOuBl)h8L>W8tEa1=oJOR))!zFBfH;vLDSx1DJ7N zzxjIzuX+uxg0o83f3yhK*KshDj@O#}Ai~Ds`=LzUnCXJb15w8dGPkP zQjW?fU&enIVqRvXXx<99R6i%VdQ@W%wCm@q&R;8W^~IF#AKQN{7hjsO;jcBa_3Kq5 zA?l<*TAdNbBgM{}Dx&yfGW@n)6`42#!2Oh!^x@KY8;NE%fF(}a`s!Yx7L6+wcRxhtKgel^7;%(5nV^US9u12bTZG_?JU3 zGBj?YG~`3oS8Krcmb^*J{_%OnI;1IS#~PcFA&`Oll5q{hgq&}O%dVaQwtF!-5LII) z&74L`Mn+lQIL3Pr&jwA?0ouRL#^u3dvh)%}^ZE6x{DUQyF?hnDaB#b+JR%p)7u9$n z`rXwSW>_~qL3hl?is`N(WFml!YQ-`FyvPh6$VAeP0ECe8^xATZPWxrn==Zw~140i& z94vVNUJ$xr??gWeO*;01v|V5RxIAE0`tv;JjdCeOTc+;~8sN)orTF;S+)(D*_slMy zJiKg;Ghs3fYM43mEN<@Cr#3R7z{f-oZ;5ED^NP1l+Hg%hV!JOw6q6M})G4UcT zsWnSP31Y-Z^}9$Ck&)Q?nZ3DLQNC%K*7Y;}8MX-#v}x^f6FuFqSp8?N z>>7L6!z6T@#ia&8I}6}l6mW)3Hzem(-XJI1-XP?f1VT$fVZ1HA>?#m-dytIa|J6C*5dKk?yTYgM!O~65pza9{ zH}#>5G)zMS<$2HZ)@`nz0gR;#6Cy_auOzb{)EJ6<>L5)yk#rX9Kf-Y!{@XxaZx_zbXUoW0?i@4DLE;CA#rk1%`DSUVKFn^oQ} zxyIyK1_0I=mS@eS{PMWq7Iw*f%c1tt#(8i^Jd&*L@@x`r$xB#T15r3TlV`*ScF8y9 z1~~BY+^ zG~cPaC?Ny1yJSATcwz}Q?*;zeDDxco?qTaPkT>*))7O}R$|quT&v91(nqHq3-+Lbz zZY&~xD55pf*#TLeYYtr%Vo$%*4t1PjA+m*AYEClDq%xRwu#a8{&gwG&)RvF;aFYx-~Ikkto6Jkms0%GdU-$Dvx>qt-HP~ zALHvNaYhJ`RjZO1YwCEGIG+4BK85~P!dxBAf2%78O$uN4a-ul3?bTLLCzGaw1cWXy zUT#g<5z;%XLz8~IWdmgIR7oCFgu-`zwN)6mRwOi)?W-qB#B4<%0|a>mXqW_>6EqM1 zI#G8DwW#RGU8~4lGAGfaOv2pU4xTHIg3GI!rN8iB0?%7TE{{i2$4`2mny~;LeHg*W z{h3!M-Q#V?yC;^PN+W!r?iPb*i+*>FF$#rkoTTGUB$vrk;jw?{c-dUFH$H-|@Ze-+ zN$cc*aB#Fe7%8vWuA9UuQAihqc2J8UTXNp0c27D#6)w|y>>pe+UMZM|85tA&}plk z5q_4~WPZgawmVXK8O@BcLyndjvC{(b*lYW|jJKI&HNj7-{KZlby4SFNc;+%N)COY| zItyM91TTn%Tk@Tr%WZv;@UQS0UICQz%(uVj>L+Bfl9CePxW_1BOG)c^e@-iV?-=;= zX+>>6F;D1ZzxxE2*1D4)Nu7BmbZm0Ti1wo>_%tR|li5A6?crOof399;qPc$uc#*HP z3JrL{4y4E1Fq}4f+Q-z1(&*FXfNNfY|F!$~C2&U1%kjT`;F@dBe>)xt;7Qo@|2o+v zpYZL04m)klcZFta)os48`Si~`?RT7^tIS6HR(m8n2x+d(op{{(eQ!~5|7V;TTW7nd z22w-H3bz&HxT^S3%2r!84YT$>V?6^mY5lymn(5oCyLwDf3vSR==-vdz9{W7+!Dd-> zLvupK`opF-tG>0ri8^P5Ov$P2@^)|F(NWc~nWm@t4`UsC(Kh zOpAu4ulG2KJK5sqrtV2WQ`F*>z}O36^Y>wARzUq9HEa^_enL_K8}C2RU#_B`)Bm*K52(}}Cu3D|dTXx~-P&Pg zA@|&4z}eIVhx#()sqt%*5xVy`noV;*vVYB}b$>RNkUd}LzRVN3$#e22R;G2|Xo}jF z{dnX3gEyp$45c+e6^BC3IIIb=L(TDs?*aDtz&Z-mHlTjEZs z!Eo4GUFGMO{x<91rpMHJ*XCtAzVkmq(XZ#OxHt#p8T&T}kyE6f z+b9|?PTKwn11e>~0dsV)5;`RcWNZfd*M4K_R)g5-vZNq-a?AH6ldTl8H@$3wM9L)1 z{=8fOS8`Eo3^7IL8>P&jLEN&3VRrrDggh#S_gz{J;`7WUl7# z#bL#GlTygUho>IUS8$20vygIyFzK!cGaQg)jnt$3J8cg5 z`mE&DQfmG=347mCdqdoh`V7R5JnOdgAJu(Ld4EBjaY8UU!$(*3gThzqh5J}NL?q57 z*Q)qgIkT)+P1HvjD+7slllHkkV%HjTXheeQRgD|?6>Bf*&C=_5o^7)I*YYwDNU9(4 zfA<{ShDlvY%5nGeQw-ay&3nM&7@=X-@Q-;9W=urDi#Mz}dFQ4yY1WGlLXleiK5iSI zdR6i=ExY84vHo3_$MG=80DR*%&(r3n788eeDO)DpH+zPLFtegMzBEd*9Ky zbEr(ou7clU^a-Z!_2NO~jmQ7s?kBG0YpvwsFb;%dNY0}-|Jnas#hI)1{cCyUjS)7T zt!K_nux}j?;;GD*t_+<{9r8*R=~5^&Uyy$*Nv0ds)`VA{xYU%+E@IDQbbrgUFdLDubzaJpXSaTzq|(+vKTK};->@jE6+be+dh zQ0&YVe&*-6?N_|%G=K5du00rUVVF{AkwN3GW$YyKZ&jX_ueJMCN~GKB{mDO9w+o?r z>5LpM(?j$uERX(OJj^$_5N0n{!`_137?>%}8&;Ih1KN4VZgudY!|ZcAP+6uWd-wW? 
zcVo-NJ^JVpA=3|=E;#tfy<*a?WWVoG?!>&=u&ov|h^Imn6Sp~HXx=WjAjpceK=Q2~ z+)mT9I|a5lL>K4p8| z7`IZurKV}Nr0gJH!lyl~r~MwPW(>7PMlVG<3;v>KiCz5U;8DoD`~n&b1Uj3dOyB|) zb>%9Nt!63v-o-+<)VoD0CU4|!{u6qp<^q{q=rhiEco&rJT>ZOY5C|#UJeidH^Y9z)<;$}B=*?hZgXfcm_EC*@cqAp?zEW-RE_NPk zQfQ%T=M;cU@Xu|NIJ=b95o9B~(zZh1Zv#{QUpjsTkBZ+0^N2KrNk^R>?@`&5dP7a$Npi+Xh73$jf}JOKT*z(aZ~nfj|~+t zvElM)alyzC<4+mkE5zhX6?$OB8xjH|phEA`b4_sG#(wk)q+(xlnP~pr7{UzndlNA6{cE=0frUh0Ti}qr zFeto?{a^i68xF>#>2ZfrnUC&C5p9p|m%i`h{z9-|Vt?LvkA>;58e7vLlDW!4L1+Hi zgLdJaxCOf!xYMTMI-;4EU($mHeXASPkBc^@4B|V9iRWx`EHer~qZWyN{cH82$PC<} z2;*4y$Hm$_g5w4aJnvz2=BMIBU1!`68sW_Vh)-hd`eC}(s69Aeiuv|B6|6gOeQq;u_Nov81UKK8sr zMJ4Fk%EGG-!bFb`6wf_ z%IaCKgAzdfGN{0&s4Zwe@u^W;5Rk};o4k_5)tW1AY0qK`ki=@R&9_jXlOw_aeE`ZI zleKzQz*QdA?R3yyjnDLeIn9^Y+^>;vco^rtt{CHZ`!WXeIi-fs= z(S=2O$w@M26JLec8E?Jem4pR%qUNf?ADXv3_!Inp2JM_x#DdlFOfD6dOHH`yJEuL< zQ*Ae#`Nl2NMHHNGZ`KTMXunRe4q+Pgih&;FR}9hYeE(m5MWh1nx#M4K5gXBwIW5Nz zFSFy`?>zi;6kI|v0-Szry!^mP{3E135^$sLcW@F~dBdj0*&IhWR}%WRxm zr453zWN_d)+V;HK_Q<}zzQ>G^lclBH$m9BvnJX&XBaF;3n`1XQTQ?2I?!Ji8F-D;< zcwS&fCY5kf!yCrgdHdHYGf%%nOg%y3jHr8`@k=A!bANJRzi*m;c+oh{cFX#EfG}{I zWV*0>u#|gZwfWpq0%Y@6nahBDGwC`9KaklgP6WLbfg!m{X?x?Qp&IP$|j8PDY4+PLlyboHnA={hXLMJn9k^A4{Km%%4>+34F zIq#%@-q4LF`HW-8ul>9{AY}A2S3c&bvHm>puEghn74R9Nec>2(wAmfG4}SD{zlhCF zo)cfQ1uoJwXy%%e#OCy4-E6ekA>E}V0 zzyh@BPP>B}jMzgtwKg^$C`K>dH{(4C$Y)RAXcq^}?i{c>+k#)^7Ig1FC-2l9JRyG( zqIyL4dWHFWV_ak0LlSd=f-5Nmr{k@(iirL1LdOEV{6O+u&}foHrF?VVhz#Eaox@~O zznxN<=P9em-Itx=zcke|K|v5oO2*NR}uIke}-q`-?1p>{pnQbH*jrd|AWYR>xn` zCNOCmi)+sNI(!OHE3qa0_oa{f+K&28u4xLSsm#(i>tfcbbT*z8{3E2vN0+Vvu{-tD zPxer`)HSVqu%$vaC=!H#ey8m}#D+pDL|Z_oCB-~QDDr4i=(yAAsI!a5DeU$FJz#c< za~#Ly5d;hKRpRMzawsk8WSW`6 zUboZ9;kXuME3NJ5b>SRs`_s;9a+!|A&M(fo6KgD4Bs?n%0vu|y{G{U{g*b@6+uct^ zlg#Nki4hCL)ug)PqrsD-ad1(#R>&?~1)M|TIicmM#dbZI=Eo|xP!96^=Hb!|0w>0w zcKLyVcwuX)pHJSDX(lJ2Pvt$n=>|DVZRKBESNJc~h zPR`04%oSIccw7;vL-Uc&R$DM#d0_6L|C2^^#k&ijt*@wgH)*;n>I@*O5?cJ1+inh| z-jW=3LwKC*lAlhwIH?H?GXlteqSV=;dtwYsMUkXl5n@0iB@{t0O;3mbwZp741T?l> z^lI<(lKp3iVC+B)I`T7n;zR7q3?K7wvFUeBF50S5u&Y2Y;G(@@`6#=)34}aD{xqO- zJ2x+fCY{HuNuKvU1H4i*LH7Bzq)sj2^3;UTts!()nl;Uwx6=kVVBMT{M^OUmL<$P7=lH%4Jio z0`HcT?hAaenl(78ucH?+xOSoNm_Diq6k5p^kjbQh)x-02z_sfPfL7)&BEeJn8xx>E zb&3Nnj>Yc#F>5m@_J+W0(f703e&G z%zKp9zxJzP@+RsIYacPW3I2s7c19bJ<$C%}M}m0Vg%OFtRZk(Ipr9$&vh<)=S>10m zkk=bt_dWFxU^mcn+gTAGpxUDg0Skvh~{@}L|@KlFK z5&-eo_jzS<^_%w}S$;2+a+jUL&I?8`0r{ZnmpG8u{u@CL3@@D;2)&rPDeBkGL&bq( zjJE^!iz0vjF;BG)#P*-`*7&ZI(2_X@KN8Vg(m8 zFVoZ685-&zc}5tVy3*ESZcy}Q+cM;D($g(%tYtRo)agg=AIu-y94)HY-XOn}5Kf+cE zaAt6MQYpPG3{T@r1IdQN9Gwk+g*QI5+Wa+9>X#%8B!1w^x-#$OSGJX$^id6P-Y^?>j9NucGVpdrM@CYC>AOFuba+Qa1k>F3_`DQ$=fNRM( zVm{96V(nspJIqSama}aMoW2j&T0ea87cO_}Qd`%c*~z=RE&z>^!jj0+@gPHku8@C{yDUD3I)?P3LSMkl*?9=+U{K zbX@N05bEnTcPOJz_p2WfKirwWb?2ZfIKsdhA6aP=8A0ZzyNZusnq3Qmzr|xps0aSQ zoQ=JpJp3wHrAZigK1J=t=Zc?uJarSbGgM;dKYdWLzvPM_xRx@W)^It;M0_)M=R7Ba zjNp%j7En2(aQWxJE3xswnf%#jiJ%Cd6>kJjq_fkqg9Xs?=p2Yg|Nj-9>kLcBIgi1B zD9BCJs7>J`>?wVx_YG4g>M)=Rt~qgN-L>Ta+>6tfYww6n*>26_K^cNG`7PS%Vbl8> zjnx|cq((~VFaWe$W7e#(#b_<@&_TS`dPB=C2!E2LXiM_WHAkjAvXV#{muAom(XSY5 zmHM&)Ig9L9A|99Pg=GCGhryhb8kpZ z!5T|Rj*XPQHR>&h5uSQYt-uu$KrZBhO^i`V%h!mjSMnj3pM~3|kbL5$?EI|0lXj4K zJe|!%aDn2f7PwXBC2C7MdbzpcKzb~1l7%oEiss5v1LLkX)l|%t$3>Q}v?65IxnRZm zfmwltSK?9#R*%%k1L&8#cKL;I&(1F{^&`V#7bY{&k=UTDz(Kr?_36ePWYW<76p|%M z^7%m?v-Op57-`D$ijj5@Ok4MEp|0(QsSuMGAcb6$IH}S=$1R*IBuy$FrY5N8W6RQ= zEkoX_-DtFkVX2?pli`w%k0lJ-_AL2rE7{2ml0Li&ZoQ-!UN7e`?vMYT?mq5s)7RGL zA7A#wFuVJNl(y!jRxU5565ksZKdD~&tYvM=L!IQ>qW=GV$2_}FDLF6m=b?7wk!u-6n-DaiNr&ihAU zHqL=IN8UZE%P+`^)Xit6w!`$JNS4C$!EJmAGwNEzwJWPPLw)4Y%9l9DTKem~TU<1A 
z3P>t;g^Dc)F^-q}5314_09`aa^POF3hJ3oR^WI-?suH@%rE;FPtksU^c)Q8@RN zB~Nd7VkaUtu;{;jHKsAdIL{1_Evini+vSz`{`h4Zi7@E}O2kY9iT*to!K#E`JhoDn z;S5muS5YJd9`k#wl4bP%CN*hSiBnqA@!42y2Oeaq8VTz;$#q$J4wk-XxzYT~K?l5# zbM@XjC|T-_73tpbr@Ufz(DfY_N?3B6g05KQATF-BXdP2VP}0{oPh>>Py2AR&mt$D` zxEv#`HW6F8*mlg`od0OhdQlHyhdT9`BB4f$#40}U0>Jscq+GQ4!{RISpWsapRn+_x!v+A*?NEUNp2hbZ%ae8h%20alN&z zYz4}37NoX~Rt*T?wRrl>!xbZC2!>fZdChn$<=mWos{d7SUeFBm zl2?;dInF<;e&Mv&S4x_^8cU2}Z9^NgUy8UM8^oZ|&^k}(B(yFUA8bgOIt|hu;c~T@ zAwrv+eRS_k41j^5nKmo+5m%RoJ5;@xd4di;yM*Y)FFEwn`$mw}*WKkOBeIT)_3kuz z^pf8*oM|&{NHmE4k!72f#HpZO!E`RbB3Q}z9CxwpKO^hEn`#8hpsSG+v%FY$Q(Mir zNFl4W`u^ay5_XY77DB(|M);>k!WNgBBl)8q_aWix=Y%6aZ)o^DR2`(vzTtzN+M5zT zMsf~z-h?kXY_>=*1*qmMIo_<)@}1*|abAqmY31H7GdSewJvG4qz_r0j&Yj0hiO+HB z;mnnfqu%Nnp*&@Lj^@p8DF!61&6#rsKG;OR>khBE-Pa(i>OC@+bNX5o&sJgamcUH> z%;0dqj40}LtuY)CWcKNvc&#D6j<1ojo&Mm}TU3BsZV8%QpW-QBV&23b?!G82uV%4t z=vb`s%tO7dnd_rr(Cq`$nQWC*bJb9*Z$5mkCBFkZT6|$`6RCtD)il4|G4Pd5! zxI(_I^fiwfAndrUxSCo<2gG<}s&lh&dNENUc3fo^0ud#1IclAORr=#Vo=?cM$M10B z=e3vGM?P&yJ2!kw(*3;TsCpbxL$tJj^|KhY9txi$)082ycv0;TG;@UUj!(NU_@-PHzsmR!;|92+Qo4V-BAJosBg?`l)52oJOCwO;AO;Z9T4x`|B@ zF6<$>$+I)ucg2@nF{`N=2NSE)9?2JVcwOtG@^!*c^Z1VFDqN*5HWS`w+dM6HDRdfM zIDiZk4iX5>pHWGS&+k{MFF_CAXagxMF~>D1V!6*&+cx*==9Kqg&cw0h15HGAm4kuV zsv;5rr@(Xv17lp({p2(k?~}L7c+W|z+k*O+We{UICxE}j%9zxKnI>2^&^4L_^6cby z#0(Z?v=}!qS&wcImyxk!YqKYJXLLO8V6)D{JOXc9)f#3Ewwg8%`$CIquhoX-5nY0f zN1^Awm!%VAxSBpA-E#@A``)nS&Y37G?{{HxJe38nIQb>U-EQomMO70`EiC60x7(e~ zaGiB8Z-2WVv~!II0x|y z7B~Ro6|w4QDs)y9V=tYxJ%8W$`|d*Ijee#P^jqGu=Z+ zkcdty;^CkVxpn`zaFZG+A9}$FEv2Mz6kb3rv@F>gHxvp`mQ5RLm6fOBV1s|>Qz|xMIj3%{v-X#iOT|)33?kfd4Fyb#n6E-R!&P2$H2{D)+pO!u(cb~`Ox1_t4oK;wdg zNC>kkf?`{;{{V7ZaS4tOwwj83wNuJ99@`zHwi`5MnfX>ltNZPO!Q6HPx6cgZFAx>O zMY`!&@`MLYJLxdIL0rJ@pcyKat2c#nLt+aXW&RUpIBorhTCzfHhN}efd98pnw2_u^cK-%N>yL z1q*GzQ)BOc4*zE~{!gC_i_tdMfiH3>3kVoN5J*PI8(z>&$F<}Apvdz8z4?{#Uw!i6$nX78W2V5HYtFX#AMk%#!Y>1 zHozB3@!7gUMAUYMAjv7=1vhgs-csR_1~9P$Z&)km7>?M`-*xo5J7nTR+o^fc4Gpq? 
z*0Ph18qZ3-y_Zo;5_MA4#ik4_>-ooF%|im0 zwXiYME$x*Xc)VI&v8$k{OcVEzO-u-10)m~4zDStN-9g-fY8DC~$r9V(A$oOz0)tuA zK#2jnyf0vyLgwd7m4kBytRplA5Pcmt6JkS3m-P0+ml!am+b%lY<`xNQDJcG`^JUCe zOk12Jtr#~+HT~O3gLOq(RwdZJn}j|LhwLuhrOhAjuJ3vtowV9IW?OnoPysu_BN>RE z+lHK>U`?Yn7LVqsMLeG*>MbCa3jc}9R`k)50PN`B2yD-(+tt^wTLh8I`{fdncpM@b zZN53-0u+`>o+DucdrEr(gAHY&-BSxnQw#fjV9Fiofs{<<0Qr5J9d~N5J=9`Q!v5}( z27B0x?waC@wRbH)=pp-XrM~EL@q{gI9KIveXJxR~q%ATPC#|Q(AhP>?z)_caetV&2 zK9B+pDl`6?r-?)-ja|d#D(qRc)Ns#na1!-(u`%ejOmjYw9@Mv03#tYe@kw*?(=UlU zt|Yxre{zcV!xyNgq`2+ZR3o#tU$f#{ zk!YuPsJOEagSD!vT)8H5dk`%8y_ZI|(agKuS(yrT<*TSFC&A$$WL%n9NgMVMcFBY9 zx?;bon=$x$m-7$ut@J-|I7Ql6x9M9@dm0kxYinp;>xhP$Y>{I+?!}omY@lSBo`Jbi z(qWc!ZARh$;&`iBid;fWzl}mS0-KD#O3vR;VVfv!*l3c{K24O;L^61%t?PY{{Rq2p z-d92o5h8`;$fujt9rMc)3jRdVS%~?o&LAv4WDs3N5zL-a&2^eYInU)``3EPf z2^%#Z7N{Z;IGf1&_B>2!P6C|bJJr>fX<;|M>rcq?&$`asHi}?Yx_Vx(xBYuZRi1Aq zM_aIo>A;xbz(0K-KEx-#>@Or+vA8*89VP>hO-_>_CSrTDE|}#~w%2N#Lya|g$(8%I zvEf9Co&54OozKR`*~OiIp0Z|Le!%zBF_jS~XBAJ=vo&A6M*3%ORDmH6=IlN0`v4Y6 z;ZscZb0-xROzOGKv3k}i=|)Ul%>J6Rd9AmLY`#{pwflC#QRw)kdb#Jsw2V#LW_Fn_ z;uXo~Lztk)zTNtV>GN%EjtrmNmmqSa!MW9|ao;n4{I@pra4HqcU9n74i+I3hQeZq8=YK z2n;{NZ^_Rw2qU)}MOq4Db}S8bhALa0J__)Qniq-J(sRP47d_W$eEJnl#Z*{mmDrW^ zKKnpQjd2r9s}6KDD_ug}JZjhw=9OIh-?0F}LZVbI3h(1st2{NTFNq#V9vDnIni$1< zA=Ts|o;PV8+E9b9GyTHz8=VnmsR6S=%(U9~=*>ov@2Rt1QhOOZmx?|XH-}ETgw`Xr z^rj^kgvk;A#?a1AcY4}<%byvk=^V^INpnmbT`Sol`eb!{@uI2Nin{9X!{_UKd5C2q zp8vJ-{Rs0~-zkR6II^9+J2ZNTe`~i6oAm>6>rGfW&Qepuy%2{SaRC+@x(N%RZVZr-5MSrgQ!$zkgRdzpYsMb(HT}@M<;vWewiMy%u)PWhG*b zE7*9lOk~o%Wa<9My4MHN*T=|mnx8EnrPK<{QV4DpYcFrzk65$JSBtFpoQOmNdHn~< zx`$o;Oa75sqPaHwu90SEwPz2b3!cVpJ0v7oNlKy|Z>%&FjcnWC0|e^fJD&~W!M0S6uQDPMXk6(D|Rp<;~}*ROQ^;~ z52?M8H=n1g2Xmq>FAvKpX%VAV!=fyg_4%g5qr&(UtZup9=Udww!Am)bXc*vQi?HEB-F-B?pkPU87kznl>BxVQUx$4FEstu(!4eeg|a#E7G0mvEjh zLrTW$CF@KB&s&CM<=0EN2>UV^G`XB|NhR0SmTivYDGLora8)&&B@dTmHXF&epHt|1 zXOG(VeXb%Gq(LI)ewexxoYa!**!I3@beBFDZ*qv+=O2hhT6TXd&Cwg@D0*jNaFM+) zj;f=A=62^sTS<2c&lrS})N}o1A{67s`Ncj^I=AYIjzySk9;19;WbA!TH9#@n*74k$ zitB@hwOE)HGqd%JA30>xuR7-Ta8GIt)TJq%EE*(|Gb8%8Kni=D3(Py-sQ-}qU6HbK zD3DS*oNuS68;MKLQ0V%&vpf-;lSl5xB*ZB$TV9I9E~caB??D9XS0>B@9J$hZr5V2D z`DKTV;Tdf#q-u@W2J2D=znV@#O9-A3M^amjrD2}E6t!p#^l?ttfv=7AqJPo8fI)sv z#qUVGq;=Wqq0T{|;7P0es(GESrlT}T*V67KG`DS>UYUUClv6a|G#g`W^js|M+Ok;( zWu-u@wDtcM9y196QlV1Fk%JFGa30oLu82pGof(%0EHpz$q55_uMH(!*66mO}QMO&&VnO$HTPGZ~qO3OGUEFam za86RcQ^7(YaI*R9&pAt;E?i^kp-|zgAY}2U$j#Tw{7SjoxC&oJ&yvZVwB^~n8<7(X z!fqUQJ}d@~Hdf56S#>u1Wv>@3kYIWLPr0eL=VY^3$fZ(%frfD*c+twD|?k7hz6IIDdihl))9tNMEJ==Weasu})LbJe0e{ z8#}elLzT-iyGHigL)`(=KcJXNU+d}WgJ-zT#lHq1d4S#I5+@aJl@4*eE*AqT&v57G z46Y(pf+R~o9z9AWK<%&c$+s+<oRwD03OoXelTly`h$!VI*}e$HWBJmXM#vnn{qj!b%y{VbiC4NNW)TM2g7 zWRK$f(V7NxB=N`i0}3&;VDn#>p5v~9#3TR=@eAHPrGY*wfuta~h1>y7|IOw9fW!8r zo8TW{pcO!Db=anm5fJH_3I_=$05EfgjwcyTQ4dOoX@Rp+!r(YmJkN`@xlRiNoOV*O zf2GV0W;^OgaGOA_JINR{0nsX#u8+fTZRN zb|5esq(A}5iwGr%5ocGV_n@*0@PzU7IeI_wGIz-mK;UZ-@`zdh189K2SO4w8zp(!c zv`_uV0|&Px*1({?&_a~qIp87Y1wGj2#|Jf0VDbN+!Pd}?sq6Q?dK~#3Q->m-M-`T( zg$G?dA7<1y^8u?o23{M*fZX~Bh5;I+S@!KP=IDt$Kj$#}$+Qw`RlUv5J5e4m@VYc-PMxcHiH=P6MQ~ppzWC3X4d3Aqzym ziHP5)Y)A@d&D%0xZ^s*P7TiefkT=sr`txxP52;ps;aQd!om_2Zg;a1X}uVStTj4z9Zf8^;92O1L@Ls|WRYhF@0-f~ySpjtFGoJRc!% zPx(%C!|<#vk{VtyPRL3;4e?Et5UTHz9Ff)X==+3^RQ<;!~UVa_|RL~ zJX1jU0#=jTW6>YOQYJ+j6>;69i z#nw+6h{~Q@Teq(Q8LdiH8D*<@kdB3lj@o#|obZ&1PpyZG!|XmS zLAtwM#j0lcQwF)D&*;%5@?P@RJ5;Ef5@RqxZ*CiLNz*D-50;A8Y~-P*DF0V`-~H9Z z*8Uq*L`6l89uO%}P!U9sCY_+7CRuPm=YO0( zu>$*rYY1q9%UF_4ZQUF0TnDJGli?o1CJ(44Tw-i^LE+g4)eb>UrsF7;gPs5E{sw&S zHF=s#S~Uvr(~!fcV{l`rG6nPCzAB}5&81v%^z-P*+~bz#uq%%SDC(EEzEnSgZV$`e 
z$zK~ndkMR8*3EoEPQkiunekk!(xFl_i8AKUbbwWkx05xy&T$aso;op&Wf&=^WQ5%2 zx@~+8Jk{OMzkz<(gR73b7eTifh_sy^BKZ;+-Z>~S09bsr9?}SOz8Juj!!+r3{rdO3 zwP|E|0{MB)R6D9_a47!t5v@K*UITleC+LLQ#Js>mDJ8i11+^77qSkgFJh*ndLP7fK znIrK1+^!aW;-Rj{_g;-32Jl_j`;Gc`AK$MtXnfUJ4_~)$Kdr9+xN-{r0@EsUPWI1$ zPqGB9AY;RQKs6Z%lu=s}@@(>_0i~fO`H%n!VB@K!ha{hKe=v;>Cm`i1Kqj@_a`vzb z+ML;n3#&fAjJr=W?>{yCcdR}pE5-xy@jXtjKQrN~46bgXb*oJ)rA^Rq zo{8v{-bGnEZq0!vbq-wu2N;_5KmQ2Gy}M2K`O2y-hkOp$P`j(IA$ehNy(;Lb(c1_f zQ0<^a;NNal;GcJYE0;TN3&+;2?IgsQgbpyPH-ik@uWL{83jWF*NsWjJh#Kpgd%ohe zBgkkKE-$v!jFud<+iBUl!) zkYrrqA<0m+)|iPYEl>8{_ZNp;RD6IjrHEHITau9R29#(mrOLrTnMmgV>NCmBFAi;G&PA>GA4 zGIP@GpFbP2zMoR#JfZbHY(@DnsBmJtKqBCik=o+9n)9g=Ko&$?tc)LiV~Ix@UH9I) zJG;1~5}%$^a;C(`wo@sWI}Hbf!6V@dfj4qdt-1Su#(_YfJ2@?c%E5QPg;#I>iLZ$; z33%nbrk*TOcGSpccUB;)#ums+2L`^A`QKpR9db4+t zM`^|{Jg0Zj&fu+9-p$R{dA?X)!<4r)K|PhAegKmqrii|vN$6qtoZ^DA5AN!lCIzc| z7&XfI{b#x`+_USFhcp>qa!zfYkKJ+`d}&(?h-AlzSA_C^j{i{PZ7x4M;fHxCnt=yt zzfF(?fld@FnXC1zxDHh8=DC*JzK?>>`*K+F_$H&gZZ=pg}p&h~kb3ry^uf!&e> z^V-10+j(rKjI-4JKvjgtgW!DSFCiZhKycC+^;dx;(5DFKNl~TIe8P2#z!DIrr{uKz zphrN)xslQ{O3zyMHSD|a?9%QbWFGg&yL}hDl)RLL%ztw`HN<~ftsFD;ta*IC-ePX- zvjS3vdTW2U^`Q}wUwR%lzmxn?dqEAGnwaEu>c+AFa=)8fhvn0zR|hDP2KBLlq9T6}eps8(7c@2+{s z=0)-jR-@M|Ob91mtkw?BaE-$9NQ| zg-W`KFt|=k-dIU-iM6;vdgOJ{z4Uj&53EyCBnEa=??`zq*z38VgyL!x$T6~Rpb)tV z|G=1`&#FbDM}ut4Mb5P?e^vp=Jw0~@!@p?ZCdL;wzS}*ny=Z*0c?=?jyKxB~b5vo@ zYn0t|ZYh$vIf8~xkTgIuElT<18nmXD+@;f8a=YOTnC?yXJ#&hxJpZQT*;dMvMmuv# zYmcV9d}tqwG*iZw0xd?TMP&7cG*U#@)CNTsL@!*u2^oo8)s?5&m_rg25KrC`uZCF; z%Qfu-(d{^G4|#DpU*ckpNY))Pjy2O6Fe@yb5IJ6-S=!1!@(oCSDg=@_3ocfH5s90G zfrDxvqTda8u~&49!!n_%8~x1PlP}n)dx(r zJY<~?8zhcil9M(lLZz)L>XE@`0ONS(_yLplSF^P(@tGA~BFwVLle357&fA!m#*;0y zWo64#B%IYe1ZHDE?jUHUNX9e_pQ&7v!RgeH*|9x%G^qPRYJC=nT;4O^8>ga2&WIg{ zMaJRJf!No~OBo!xzLACbpCFH$0jW(%w17qI2a9v=6W2mdO?5^JORPQLB`aS;d*z`! 
ziwpEmT+2e=V69Ez&96_eva{Y3>qr!tMKMI3o zsD${`LyrkKt_ModO-o7~u%5TB-0v_x9w|CzU9k5vIkZnqshGQTbjYlDuZI9$_y*<% zIoP&OR)>V|O4_Ekkxm+3U$~4hum`Ql;rolNWf#$fwi}n9Zc74ONhs_m{n!#aSD9A| z{t&GnRTcDJH)V>O6uDg{())$w z6csud(`KV(YIDr`Yt!@k4iwLpTt$$eI`IOJOQz1`P0tN>Ko0W+N}&Q$&In5mndM9?p8YI>ontCyMt=H;E6 zYNDxyzO~U$P9mcjfhwAq!!ter_3z8$IX+U(4_G}W)mM%I_G>CZ)G{&;XF^&8)H zF;Z_`52Ruf^rouqUN*E{C4Uf~K<`THyKRdD87F`iq?IWy5V#>eaf$OD)ZvO37bpF6 zF=b7r-ypx)==g^bVKCE;rceK%KD&R@pb6_Q)e)*hZjDr)vG`=Q(Z*`{$gBV9vJcw# z^T6|;-I~{eERwrWZu@J2RtfxjKz%}FUnJEaRsk*R?aQef;r<3gn&H{D1DN>~-qjvxoTWjaH(DOV16{9UYnO$Zs1ZdtObz`XaXu z1fBbHPT3uTFWHh2M_JJ9st=kx!geh>*AxM2GXiqy^cr11ohC0(2_Jl2A9PzU#ooWK zK679dZC9&NxoF9BFv{%>$e~{~b^VxG(D=|TGQ~b|G(N&;x%Y;-(nn8FN7u&be~ez+ zbjOdDv~+`=v*%xLppg!dV{HT7zab%y6^q&z&}tQXZ{p8h8M9*Xf#{4Y=V?$(6i0y4-o_NmW zt-19xz2$dB$cRy^BYXZ%m1M>zzMM)g4$K;Pk#SDREiW^4q)JEit|$DcJZAI9MpV`* zmjYwH-Cyx}#pH&}0cUecFq==Dty3Y#45+oJimf#RTaVl>qvBCq@i8;4JHH6qpW4H) zqphgKKD0ULsoKp_OQ^b}N9KmsUWppev0Lt`eI7^_`t|OdmlZezb0?!p+V|FyTLp?0 zP?f%>rc@;^*lHbOaQEQ{n*rP!G+?m)`~OHT=9nT#>Jk-YLsO>5zT;_ zmk}jY-QT6wF9l)TZkm3~lq)hQ?bVkcp@VJB1@>etRb%FRK-07R(UzjWMnl9aX6E5x zV*4r$2HaB-We%zb1V5MO%8{QVk^Y<>w<3DY7xPV_U)Nh{g0*UgSNeE;`GRGV%4$vB z>L*wFrWEdP0upWxr5{NN1X-7<5rrlVtN88h6HjK3m%cC zQ|z53iZRCXG%)WSt>>77y^^KSbN92`&T<5kTT&v zolJn~M@T(^$~fK(E>8vimx*V#Wq9?c+8Uzx_Q7^{kehbVD1ycEq1)rpM2r z83@YXQ77G8W`{#O{~oZWlpM}oiyky2e=0aYtOuJ&kM_g#u}L(UFgdsK^;$7ANJMvl zoz9j0dn;Vcv_Ec3;18a1+rOs1@3f#H&q-Jnt92jg5icUvo}zDyh&@@sKU1^k)gLjN zl1qu}nE9x~B~<3KJM{M4>evakkLix1CF`O?XUNuGh1N}2`N*tcaTt-fI(ZpC;bq@9 zoAFygJ{B)^?`Q-zba2+OGEI%89vjn~-pGAq3Qy6i<_kW^fU{R{yK*uPv>q$+diw=zM7SC@Z?cx-U4Eie~tjmqCQUC7FHaX{W0#tef6O2*NMdV7}6Z zclJx36(KR|!YXlB%YQ5wiX0uyWP}1Gtl$b6>5&EpK%iJW7Ob6#N2x!oLYV2@LP&TL)b^$+cY(d1;+G8YgeBMp{0)Ci}Ma)TN6K%#ff*gFZRL8bIxG z;{VXylspo=I4YKX%&o05@g)t*B3|Y_8Z;bzi&m^Xju)>^^-v(`3J&3&J zLHsAgXnW;>F7H~=x4Qup!WM0DO#g{1(vxLQrtj%#eE;7ok$v(ZyJDLmM{K7i!|$yyQQbrP62^x0Yg|tzIuddZK33LIm< zZ0S+uh(DEfBc>N= z^sb6XNRJ5#{epbJLID$cXXP_!E%+k&W5qWDkC@*7^}!~sNYo^4|DYfxDLF|uoyL|+ z7k`=q-9jsmLHE4kK|LU!${cd@@Cx#aGM*Ks*(?$A1$c6G(PL7CDg#3|HixV8)~;N56NY*3wBt(y15 zrLW2K7fYc==b+Z+6fF#tI5#wtIpBs}!5DUs{{R-8QZXZG`?%(wboYR%rM%kUq85uy z1&8K3Gt#@GAh0|+CJ$;g{P@drWISWZra}Tj1l%DFr*{6CT9J>QtIZnd8HZ=r`q*%D z%6sFPlFV3phJlRWJZL2m%}DRH74mo?@rpMJxf@W?azz0ioj?Hq;om6|uZ$e^iAD=( zx>>?3(4_dDzS{fh`P*D*)u$*f)mBPko--A{u;iT4V;SqoF)!%~OnX6OY?z z`RFZlxtylr?n2DhjLYvASpfe|=ZjBWa6&xpcFxKLU}DPyCj2w!NXw4QkrgilP*Nnh z?=;Z_;6QPhyPSg)7I}AiH0&s-!~M6&Lh{9{U6yQyI-tgk+h5Xh!2U|bgrx#dv;!jO z1b|KC0&GISs6TP>!WI=>P-d*rPLS#MulH&}pof-+MkE~*mfBR1#&&cF8I8R09ls>! 
zJtm^S5?`-~0b0{r1_-gQb>DKJx~9jFuQbHG)CYl9awT4=ReyBJzwzCtk8QadLg1Q$ zJ|oj?%_CBhimYv$ANq91y05&C>0llGN?+$VYUU5 z!nHsY1AKbdA9jfqjQa}B5Z^#!FOc7X#h<#kgQ9n18^pVPY;}1#g=?Y*Y8?R(W(^^m zka%EE^vzwOwSb^V9G$8z4-lSM=ynWvbNfK;HMk}H_X6I^U!cwZT;OKqQf0< z%sN=U`&2yX0DL~f)i|tH3*Q}aBunv~sBv7|?rgzlccn|W`1%QPG4ZFoe{bxGAPJ0H z07#FvfCkH7D}{On2M7V7nB4#5PWJ8s@o|2@EWoqtzemPp4m>rTM1<=BfE)4Qf5zR` zBjZ(po_A~MyQ#_lq4~dDeZzmg3Ir0j-2Fzt|D5T6;UWN?KKvh#`(G~oKV|r$o6s$D zil?Kn5NHB07h=Pu;*f(QyXREo`LtRU&}$n`-mYs=m^sj#Cs!-2R6Llp?NhkN_L6VZX(2Ug?##chn@*Y6@ z=%XDY1PGrc;rbxSEfHDAU;bkeJj&HCm4m^zai7;-}VH8l35 z6?h&?!bN3sLjJ4}r6zuMueVz9MOgVP%td{O1)F}i41-a6+wU!Rt*`OQCt1e}FI|Bb zsu5VKn0ND6dolIqHr)3y-ERTq|40Je-*n|_prLBfg+}BYs ziK$qo23}(-4*_pVg}iH5nZS1kkAtN-p^(E^hz2%3X$d!7!wt-wU-X>N>nbQiuoWrNDX*-RK=K{z=eUGAb>rI^G)Hi+Wm~csPWI0w~3TG|JW0F7R0SOd= zD3A8hM%5!&h7KkJR~gti+bA7W(K#FLem9NZVoj7{sqniVKhylBS@oZf1ebpvq3T4U0i+=~FV9I<2WmQLKiF%uwb_+fhhP^^LQC`N zYw*sAmQ24&xompA<Na)sQ@vj(Qh~R4zXVQ-``| zNbZmr{TVB7e19g0=DvnWw>2r@nhD+o79~b1iYXALJ}T01*q88S^~w``>9^aT)aKVE zND<7bjp=?g#V@-Vaz;$ewa|NE5Ar5rX>Vi2H!iP1DvvxE!g)cA=*Jvce0up_HMWzP zZod|yZj(?ay%(8C)%M}Efg6&H0G@JX!II zMdDKKd}(fBB8M`Hkh2}^ab28o3|?DhJq^$Un#yIaE{7r7e& zVU(w8h3d$IYE zOntdn<8bf$wLP(NP}LYSH4V=3P=T{v|Kv@$1|VNC(wWig8#oU7=PlK+ex#t55ASWr zQ^r9}cU}fXyB+D?ZVJ@98C%p(`CxEniC$;kGfLX zqCkb|f4VMtd=R^lhG^&WEyI{~G>?#zZdM#SMMlv^K5S(FY2KAwZcEj8f-SBfck9eF zO-v2zXb-8u5%U_>9hklk5Q{W_|9;*TLYR|rYLXLcDlpwh{HYBesdJB3JZSl!#a3(l znAP^kV>LdxEH=%D>9>keF*Fm>YWyPnV0j@*&dqaxNJ@SSH~T{OjBsd6kVM)l@|tlm zqxrZ;a^f{~2YyNt_#UnM23)@B?KTZ9in6@4j~8BMolR|+dfVv)D``zrJ*j&UM&h@mzjC?)$2YeJ zSK~qjc9N%|OdQ?do>>@^dPV+dxA1rSFj?C~$*UH04zTQ)n7~70jZDN_&wO5g%|9+K&RDS=YH#_DWql6oH-2(Ch0j z)*``GkhWw3s$B=#FqOL+si^`TPpjuN4w1&zq*E<*T5PwR5XqymbSg_{B=O{?qB51= zD1hG;=_~ENTtA73JP-X>XXhqDaa2DY;z-9*!0Pt+lI5VB*eA#w+tJ1eas`GE%+(=i zo#$*F?rx~I9s@YB+OErN=^a0xw`}1|NAZR0mM!kVz2&Hlb>Y&%wjK#ofQ&nI|KKfV zgwfh}%rh{}`?^0lf?QYnq%@V}t~oVf)6CW87 z85xP#d>kNp%d~X?%;N0KwdXWp?`tZ`Uhsy3U)AAJD$wPlyf+)00cWU-gXG}gmL2`x z*{nC^VW%q2bI!pZJnmd&YdYwWd0dNb^(%ON$C%YHO&s@4T1HDt(`cQ!*HR-3%?Xj4 z%AeuW*od4hU5vqt*N#f!of5BLN>_sy>{SIR2Bv&f!bn>mHSw5xV{qk^T$6SLvVWM^`nzIM0XQW~ZQLRTW48_|p!r`1syUc@)+k{o2f( zzMeyOGGDfDrC~MUZa-Vgi2>RR^uC0J{cov)2DXhk1)c>G&?y( zi@TBZ!I|tEVIuhUc+yW_2ZqxhaWal&`*LGA5YT4wrS4 zZB}9+QmUsn@n_l24ZSO{oXE6gDmU+{FUWZE&EgxU;yRfutjB=k{y`>XZ^m`#&qq$( zWs9_Sz4ln6OU{1U>3W?{+*4I*z+B!r#07`z3M2DFg4EaQYbX^gr_WiN9*eLR8LXTN zmu?`B-F@xf$@0(kS<&HWPD&gfBw13A9S^uqYNXlzab+a&ZA7>!n|9fLakwFsLT(=F zEGrNzX-OW%KUo?K_T%}fzrom#_-m1ZI)ht+PX2ZV$}Z}EaAPV5c)+J59_0?c#+y;6 z2Y3-pebCs(X2OF96JG4#1V$D0x93zkDro-FGVb!9(D_)0Hl3}?m;I|()UkOHZGqE= ziPE?B&fQ^~+T$CD6Az#BYghjxJ^##Y}3)&X3IO*_M-s;fH^Be*m4GJC}7fv`-OiBfsru zKEqeUWV9t~&)8<2XcW;gjl+KitP<*&VT^KZ=rQrf;GzA$uMMjOjRhZgm-FM&Y4Iu9 zsa8+B(;!cXZsOj;zzV?e1bIr|Hxl{J|Nm9~+yBYF=l`euzhe{s53NXn*fPA?Q!92M S8Nj7Lw{KXPR$ue{>%Rd4*rW#l literal 0 HcmV?d00001