Dominic
Employee

Join us as we explore a groundbreaking shift in enterprise technology with Jeremiah Stone, CTO at SnapLogic. Discover how SnapLogic's AgentCreator is changing the way businesses automate and streamline processes. We unravel the rise of agent-based AI systems, a trend embraced by tech giants like Salesforce, and dive into how this approach prioritizes business goals over traditional coding. Jeremiah shares insights into how agents represent a new domain in IT systems, and how SnapLogic is already delivering results in this transformative field.

We also investigate what is required for success with AI in business, emphasizing the importance of collaboration between business and technical experts through pair programming models. Jeremiah shares real-world success stories of AI initiatives led by business leaders, showcasing AI as a powerful tool for tackling domain-specific challenges in areas like order reconciliation and contract management. Finally, we discuss the need for agile approaches that minimize risk, accelerate innovation, and drive significant business value across functions.

Resources mentioned in the episode:

Transcript

Speaker 1:

Welcome back to the Enterprise Alchemists with your hosts, Dominic Wellington and Guy Murphy. Hey, Guy. Hi there. It's a pleasure to be here in person in London for once, rather than everyone in their little study. And even better, we've got with us Jeremiah Stone, CTO at SnapLogic. Welcome, Jeremiah, welcome to London.

Speaker 2:

Hello, Dominic and Guy. Great to be here.

Speaker 1:

So, thanks for joining us. What we wanted to talk about today: well, we're recording right now, pulling back the curtain a little bit, between the two big Integrate events in San Francisco and London, which mark this part of the year for SnapLogic. So why don't we talk a little about what we already announced in San Francisco, and what we will be announcing in London from our point of view, though by the time this comes out it will also be in the past for our listeners.

Speaker 2:

Well, it's an exciting time for SnapLogic. We are really firing on all cylinders. We have historically aspired to be the premier platform for helping people get important workloads into production, focusing on the impediments usually driven by data integration, application integration, and orchestration. And now we have this world of not only our traditional artificial intelligence but generative AI coming into the mix. That has so far been focused on the worlds of applications and assistants, and I think we see a lot of focus on adding more flexibility, autonomy, and the ability for these systems to work within goals and constraints rather than hard deterministic programming. And that's what's being called agents.

Speaker 2:

And we have launched our offering to make this a reality, called SnapLogic AgentCreator. It has had a lot of really good feedback and good support, and we already have multiple customers in production using it. As a product leader, it's the dream when you can launch something that you feel is fundamentally meaningful for your customers and for the market, and already have customers standing up, pointing to the value they're getting, and saying, yeah, this is good stuff. So very exciting times for us.

Speaker 1:

Yeah, and of course we've seen a whole ton of other enterprise vendors also talking about agents. Famously, Salesforce made a big deal about their agent capabilities, but pretty much everyone is moving in this direction. Why do you think that is? What is it that people should really understand about this new wave compared to the previous, let's say chatbot, wave of AI?

Speaker 2:

Well, I think this really gets to the core of why we have ever thought digital or computational systems valuable, and that is the ability to automate labor: drudgery, toil, as you put it in a former episode, from the ops world. We want to progress value and outcomes, and we want, to the greatest extent possible, to automate the necessary drudgery and toil that needs to be done in order to do that. Whether we're talking about the tabulation of votes (this is voting week in the United States), or the calculation of taxes, or customer interaction, it's all been the same thing. And I think what people are very excited about with agents is the long-held promise of digital technology: a more and more natural human interface to the instruction set and the outputs of our digital technology. And so that is a long arc.

Speaker 2:

That's been decades, really, back to the dawn of business and personal computing, and I think agents provide the opportunity, at least in theory, to specify a goal and constraints and then to have a much simpler model for execution, rather than deterministically coding every step in the business process. That's very exciting. It provides the opportunity for business leaders and others to state the business goal and have the system support them, and that's an exciting promise. The other thing that's happening, though, is that in some ways agents are easier to market and describe, because when we talk about an agent we have corollaries in the human world. Whether it's a sales agent, a support agent, or a travel agent, we have this idea of a discrete set of responsibilities and capabilities, and that's a nice analog from a marketing perspective. It's a little bit easier to talk about than a language model or a predictive analytic.
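The goal-and-constraints execution model Jeremiah describes can be sketched as a minimal control loop. This is a hypothetical illustration, not SnapLogic's implementation: the caller states when the goal is met and which constraints must hold, and the loop chooses actions until one or the other stops it.

```python
# Minimal sketch of a goal-and-constraints agent loop (all names hypothetical).
# Instead of hard-coding every step, the caller states a goal and constraints;
# the loop picks the next action until the goal is met or a constraint trips.

def run_agent(goal_met, pick_next_action, constraints, max_steps=10):
    """Run actions until goal_met() is True, a constraint fails, or we give up."""
    state = {"steps": []}
    for _ in range(max_steps):
        if goal_met(state):
            return ("done", state)
        action = pick_next_action(state)          # in practice, often an LLM call
        if not all(check(action, state) for check in constraints):
            return ("blocked", state)             # constraint violated: stop
        state["steps"].append(action())           # execute and record the result
    return ("gave_up", state)

# Toy usage: the goal is "collect three results"; a constraint caps total steps.
result, state = run_agent(
    goal_met=lambda s: len(s["steps"]) >= 3,
    pick_next_action=lambda s: (lambda: f"result-{len(s['steps'])}"),
    constraints=[lambda action, s: len(s["steps"]) < 5],
)
```

The point of the sketch is the inversion of control: the business logic lives in `goal_met` and `constraints`, while the step-by-step flow is left to the system.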

Speaker 3:

Jeremiah, we've both been around the industry for, in my case, possibly too long now, and we've talked many times about data integration, application integration, and their convergence. When I'm talking to senior architects, there's a growing conversation, an awareness, that this may be the third integration domain, very much around the dynamics of these engines and their life cycles. They are not an incremental evolution of what has come before. This is not a Salesforce compared to an on-prem system; this is not a Snowflake compared to a Teradata. What are you seeing when you're talking to IT leadership about an understanding of that, and what is SnapLogic's vision of that? Because we are a market-leading data and app integration vendor, there's almost a question of why: does SnapLogic have a point of view, with the technology, on what integration is in this new world?

Speaker 2:

I tend to concur that this is a sort of third domain, from the perspective of getting systems to work with each other, if that's what we want to say integration, automation, and orchestration is. What we've seen evolve thus far is that when we see a new domain, it means we have a new set of patterns, a new set of problems, a new set of challenges, and the prior approaches don't fit nicely into that domain. So let's take it sequentially. Data integration: you could argue it goes back as far as the unbundling of the System/360, but as an actual market where we saw platforms and products emerge, we're looking at the mid-90s, really, with the focus on Inmon-type architectures around data warehousing, looking at time-invariant analytical stores versus the transactional applications that fed them. Those patterns of integration were largely one-directional, and we spawned acronyms like extract, transform, and load (ETL). Those were largely batch-oriented: high-throughput, high-volume, but relatively predictable workloads. And then we saw continued fragmentation, despite the suite-versus-best-of-breed debate: more and more focus on deeper, broader capabilities for different professional groups. That created the application universe that continues to expand at an ever-accelerating rate, with cloud computing and software as a service, and the emergence of things like SOAP and JSON.

Speaker 2:

And then a completely different paradigm: chatty, bidirectional, much more business-process-oriented. Suddenly our one-way "suck it all out, mash it all together and put it in one data store" didn't work anymore. And then we developed much more fine-grained messaging: message buses, ways to have chattier conversations, so to speak. Smaller messages, more frequent, multi-directional, with a lot more mapping involved.

Speaker 2:

What are we seeing now with language models? Interestingly enough, when you look at the world of agents and AI, it is a blend of the two. In one sense, there's a data workload that is ETL: taking your data that these models have not seen before and have not been trained upon, so they have no statistical representation of it, and providing that. So you hear people reusing a lot of data integration words, like "hydrate your language model" and those sorts of things. This is my field, and I sort of bristle at those terms, because I don't think that they're accurate.

Speaker 2:

From an analog point of view, though, they're okay. Yes, we reason by analogy a lot; it's not a direct cognate, but it's a reasonable analogy.

Speaker 2:

On the other hand, as we all know from having used these systems, they're very synchronous; in a large sense they're synchronous workloads. If we look at the interaction model, there are often people in the loop, so-called real-time inference, which looks a lot more like an application workload. On the other hand, we have customers in production that are doing large-scale batch workloads with large language models, to do things like update job descriptions across hundreds of positions in a formal position hierarchy. So I do think it's a new world. It's a new world that is largely document-centric, due to the fact that we're dealing with lots and lots of text, but computationally there's a lot of data integration in there.

Speaker 2:

But the analogs are useful only until they kind of break down. One example: I had a customer talk to me about implementing a lookup against documents, which tend to change, but only in subsets, and searching for analogies. This is almost like a slowly changing dimension inside a document: a certain clause changes, but the larger document structure stays the same. That's an interesting analog, and then you start to think of different computational approaches. How does this relate to SnapLogic, and to the larger market? It's kind of fascinating. You see technology providers that you would expect to be doing interesting things here doing interesting things: MongoDB, for example, having both a vector store and a cross-reference from the vector to the documents that created the vectors, which lets you start to talk about a slowly changing dimension on a document, which is pretty fascinating. A lot of the database vendors are just adding vector stores and those sorts of things. On the other hand, you see the emergence of completely new capabilities, whether that is the hyperscalers offering their own model gardens, where you have many different endpoints, or new entities like Pinecone coming out.
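The "slowly changing dimension on a document" idea can be sketched concretely: keep a cross-reference from each embedded clause back to its source text and a content hash, and re-embed only the clauses that actually changed between document versions. This is a simplified illustration; `embed` stands in for a real embedding model, and the in-memory dict stands in for a vector store.

```python
# Sketch: re-embed only the clauses that changed between document versions,
# keeping a cross-reference from each vector back to its source clause.
# embed() is a placeholder for a real embedding model.
import hashlib

def embed(text):
    # Placeholder "embedding": a tiny numeric signature, not a real model.
    return [len(text), sum(map(ord, text)) % 997]

def refresh_index(index, clauses):
    """index maps clause_id -> {'hash', 'vector', 'text'}.
    Returns the clause ids that were (re)embedded."""
    changed = []
    for clause_id, text in clauses.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        entry = index.get(clause_id)
        if entry is None or entry["hash"] != digest:
            index[clause_id] = {"hash": digest, "vector": embed(text), "text": text}
            changed.append(clause_id)
    return changed

index = {}
v1 = {"term": "12 month term", "liability": "capped at fees paid"}
refresh_index(index, v1)            # first pass embeds every clause
v2 = {"term": "24 month term", "liability": "capped at fees paid"}
changed = refresh_index(index, v2)  # only the edited clause re-embeds
```

The design choice mirrors the slowly-changing-dimension analogy: the hash detects which "dimension rows" (clauses) changed, so the expensive embedding step runs only on the delta.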

Speaker 2:

So where does SnapLogic fit in this? I think we fit where we've always fit, which is to support people who are trying to develop important workloads, by helping them very easily bring these different pieces of technology together to achieve a larger goal, and to do so with a very low change cost: quick to get something into production, and inexpensive and easy to manage through time and to understand at scale. I think we're very fortunate in that our core streaming document architecture maps into this world well, and I wouldn't say that we're trying to use an application or data paradigm to develop into this. In fact, we're developing new technology and new ways of working with these technologies. We are changing deep things within the platform in order to work with this technology, but we're able to move very quickly because of the way we operate and develop, and also because of our core processing architecture.

Speaker 3:

So, taking this back to a more enterprise architecture point of view: if we've got the third domain, obviously senior architects have to think about how they onboard this technology, and about the barriers to entry. There's been a lot of interesting conversation about whether this is just a new widget. From my point of view, no, because the power is, as you say, this ability to find patterns that previously needed human processing and that traditional algorithms couldn't find.

Speaker 3:

But there is also the challenge that some of these systems can be non-deterministic; sometimes you get strange answers back. From an architecture point of view, that also opens things up, given the pace of change in the market. I know that our labs are moving at an incredible pace, and on one project, one of their lessons learned, when I spoke to them just last month, was: don't be hung up on the model that you started the project with, because it is probably not going to be the model that you end with. Again, as a CTO talking to other CTOs, how much tension do you see with organisational change around this type of thing?

Speaker 2:

Well, I think the implementation model that is emerging is a pair programming model, but one that pairs a business process expert, an organizational implementer, with a technical expert, a technical implementer. We are taking a very experiential approach to this entire market. We seek to be an exemplar of applying this technology in our own business, and so one of the first production workloads that we had was led by our senior finance manager, Nicole Hoots, and one of our technical architects, Chris Ward (I think you've had Chris on the show), and they paired together to discover and implement this process. What is really interesting is that the use cases we're seeing go into production are not reimplementations of use cases already in focus by the technical department; they are fundamentally greenfield use cases. So if I'm looking at this as an enterprise architect, I should probably be honest with myself that the way I've looked at architecting for problems in technology for the last two decades may not be particularly useful, because that has been driven by the same set of use cases that we took into client-server, that we took into web, that we took into mobile. This is not the same thing. So, falling back on your domain expertise and your understanding of those business processes relative to the technical requirements that they have?

Speaker 2:

I would caution against that. Take a very iterative and agile approach, and delay hard binding of architecture as long as possible, because we have to admit that we don't yet understand the domains where this will be most successful. On the other hand, the good news is that all the work we've done around flow-based project delivery really works, and so if you're running a tight, iterative pair programming approach between a business expert and a technical expert, you can go far fast. That is our experience, and that is our customers' experience. We're now passing half a dozen large, global, complex organizations in production with workloads that they're finding very, very valuable. So I do feel like, as an industry, we're turning the corner from "shiny object, what does it do?" to "all right, drop it on your foot": it's taking costs out, it's creating new production, it's earning me money, it's saving me money, it's managing my risk. It's doing the things we expect from enterprise tech.

Speaker 1:

And that's very much what I'm seeing as well. On the plane over here I was reading a paper by the RAND Corporation (I'll try to drop that in the show notes for listeners to review), and they identified that the major cause of failure in AI projects was exactly a disconnection between the tech and the business domain the tech was supposed to be applied to, so that pair programming model really speaks to me and to the conversations I've been having. I attended an event in France, and it was notable that all of the successful AI projects described during the conference were CFO-initiated, not CIO- or CTO-initiated. They didn't come from IT; they didn't come from, as you say, a shiny object and "what should we use it for?", but from "I have a problem, a question, a difficulty in my domain; let's figure out if I can solve it." And, as you said, these are novel problems, because they weren't a good fit for previous generations of tech.

Speaker 3:

Interestingly, I can share, discreetly, an anti-pattern of this. I was talking to one team where their evaluation of AI as a strategy was to give it to a junior developer, to spend an unspecified amount of time effectively testing connectivity to various public AI models, and this was how they would have done it.

Speaker 3:

Yeah, and that was it, because they just saw these as black-box things. When we spoke to them and asked what the use cases were, the use cases were largely defined; several of them were document-oriented. But what was interesting was that the person who actually owned that document process in the business was not involved in the evaluation whatsoever. That, for me, was an amber-to-red light about that approach. This is not just another database; you need to engage with the business to understand the real power of these engines. In this case, it seemed to be more of a tick-list of technology than a "what does it really do for the business?"

Speaker 2:

No, I think that's exactly right, Guy. And look, it's very, very tempting to try to control as many dimensions as possible and apply technology to problems we understand well.

Speaker 2:

Over and over again I see this when I'm working with companies and customers: you can almost predict someone's professional background from the things they talk about in a meeting when they're discussing what they're working on. If you find somebody who is really focusing on how they manage, say, customer interactions and service, you look at their LinkedIn and, sure enough, they've led the CRM group or the service group, et cetera. So we should be aware of these biases that we have. But on a more fundamental level, we are biased toward linear, discrete state-and-action models for the processes we support. Think about how we even talk about this: it's always order-to-cash, candidate-to-employee, gross-to-net payroll calculation, available-to-promise from a supply chain perspective.

Speaker 2:

These are all very discrete state-model business processes, because that was the domain of things we could handle with referential integrity in a relational database, right? We can actually handle state changes, and we can handle flow through that world. This technology is the ability to compute on not-well-structured information. That means it's specifically not going to be incredibly useful in those static, linear, well-typed, well-managed processes. Do we see people having success doing document processing better? Sure, absolutely.

Speaker 2:

Or even handling image processing, where the image turns into a document. Sure, that does work. And look, at a certain point in my career I had to calculate offsets for check printing and read in pay stubs, that sort of thing.

Speaker 3:

And it's misery, and yes it can help there.

Speaker 2:

But the possibility of these technologies to actually rip waste and cost out of the way we run business, by enabling the most valuable people in an organization to shed large percentages of their toil, is big. That's not doing order-to-cash better; it's a completely different, emergent space. We're not talking about payroll calculation improvement; we're talking about automating payroll reconciliation. It's a very, very different thing that heretofore was only managed by experts with certifications and spreadsheets, right? Those are domains where we see a lot of impact, and it's very exciting.

Speaker 2:

On the other hand, we have to admit to ourselves, or I think the enterprise architecture community has to admit, that we need to upskill fast, because the deep well of pattern skills and capability that we've had in the architecture world does not serve us particularly well here. I think we should lean into understanding that and get our hands dirty quickly, get as close to the coalface of this as we can, as fast as possible. The ones that are doing that are reaping unfair rewards and unfair competitiveness and delivering unfair advantage back to their businesses. And the good news is that ultimately this is not magic. The boxes don't have to be black. If you get close to this technology, you can understand it, you can wail on it, and you can go fast, and the people that are doing that are going to put a lot of hurt on the people who don't.

Speaker 1:

Absolutely. And yes, speaking from experience as a buyer, I end up talking a lot about the risk of shadow AI, the other failure mode: not the techies going off and doing their own thing, but the domain people going off and doing their own thing without input from technology. By analogy with my own experience with cloud: we talked about shadow IT back in the day, when developers would swipe a credit card or marketers would sign up for some SaaS platform, and the compliance people, when they found out, would tear their hair out.

Speaker 2:

Dominic, that's a really, really good point, because another thing that's different about this technology is that it's incredibly accessible.

Speaker 2:

Because it is primarily text-oriented, and because the academic tools for exploring and working with this over the past several decades of natural language processing have largely been used by researchers who were not programmers, it's very accessible to non-programmers, which means that the risk surface is enormous. And we have seen people, unfortunately including professionals, inadvertently create breaches by creating a public custom GPT, for example on OpenAI, and uploading their customer lists or sales lists or support lists. These things are happening. So the other element of this is governance: many companies are creating basically a language model bus, or something like it, where they can effectively provide access to the employees who want to innovate while still having governance. And what's amazing is that these are largely API-oriented systems, so you can apply a lot of really good practices from the API management world in this space.
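The "language model bus" idea, applying API-management practices in front of model endpoints, can be sketched in a few lines. This is a hypothetical illustration, not any vendor's product: a gateway that checks an access token, redacts obvious PII before the prompt leaves, and logs every call for attribution; `call_model` stands in for the real provider call.

```python
# Sketch of a minimal "language model bus": a gateway applying API-management
# style governance (auth, redaction, audit logging) in front of a model
# endpoint. call_model() is a placeholder, not a real provider API.
import re

ALLOWED_TOKENS = {"team-a-token", "team-b-token"}   # hypothetical team tokens
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")      # crude email detector
audit_log = []

def call_model(prompt):
    return f"model-response-to:{prompt}"            # stand-in for the LLM call

def gateway(token, prompt):
    if token not in ALLOWED_TOKENS:
        raise PermissionError("unknown token")
    redacted = EMAIL.sub("[REDACTED]", prompt)      # strip obvious PII before it leaves
    audit_log.append((token, redacted))             # every call is attributable
    return call_model(redacted)

reply = gateway("team-a-token", "Summarize the ticket from alice@example.com")
```

In practice the same shape is usually delivered through an API gateway product rather than hand-rolled code, but the governance levers (authentication, policy enforcement, auditability) are the same ones the API management world already understands.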

Speaker 2:

On the other hand, the longer these APIs stay dark, and people can just grab an access token at very low personal cost because of the amount of subsidization happening from the investment here, the more it's a high-risk environment. To those that defer this and say, "oh no, our lawyers have to look at that use case first": be very assured that people in your organization have access to these models, and your data is leaving unless you're managing it appropriately. Unless you're doing very aggressive IP blocking with your network infrastructure... and even if you block it at the firewall, they've still got their mobile phones.

Speaker 1:

That's trivial, yeah. And that's one of the things that I think is, on the one hand, a promise and, on the other hand, a source of confusion for some of the enterprise architects I talk to: AI is so many things. When you position it like that, as lots of APIs where you need to think about data flows and compliance, that is speaking a language they're going to be familiar with, which will speak to their biases. But the fuzzy definitions of AI mean that AI is both that and also some very narrow features, like better text recognition for a voice tree, and I've seen people struggle with "sure, I'm doing AI, I have image recognition", when that's not exactly what we're talking about. How would you frame that for people in that position? That architectural definition of AI, in a way that doesn't just sprinkle AI over the top of an existing model, but helps them rethink what they can do?

Speaker 3:

Surely it's that the market is now maturing (I'm not going to say mature, because it's moving way too quickly) into a pragmatic categorization. I think the complexity is going to come when we start seeing some of these new models overlap with each other. We have image recognition and image production, music recognition and music production, voice-to-text and text-to-voice, LLMs, deep learning, specific engines and models. From my point of view, putting rules around those types of things is today relatively simple. But I think the challenge is coming, and we're now hearing about early research in the public domain. I came across an interesting paper where a group used a subset of OpenAI's models to attempt predictive plant maintenance as an academic exercise. For those who aren't in that particular space, that is probably one of the most successful, hardened uses of quote-unquote big data AI around, because it has been used for 28 years now on things like power generation and wind farm maintenance cycles. What was interesting was that they were using a generic model to do it. The research came back and said it was actually about 85% as good. The researchers were shocked that it was that good; they expected it not to be anywhere near that. Now they're going to go back, and they said that, with some optimization, they'll close the gap.

Speaker 3:

Now, from my point of view, this is going to be the challenge: not "can you do it?", but what happens when these more open-framework, quote-unquote AI engines are text-to-voice, imaging, and predictive, as well as what we term LLM, all at once? I don't think it's there yet, but give it another two or three years, with the way some of OpenAI's announcements describe what they see as next-gen, and we're going to have an interestingly convoluted, multi-domain engine. I have no idea how that's going to get managed as an AI, because it's going to be "please don't use this capability within the thing itself" rather than "don't use this discrete engine."

Speaker 2:

I think that's a well-founded concern. To go back to how this impacts enterprise architecture broadly: in enterprise architecture we tend to think within specific domains. What are the business processes and activities, what is the data associated with those, what are the applications that manage that data, and, ultimately, what is the infrastructure it all runs on? And how do we think about managing that through time in a way that provides flexibility for the business to innovate, and manages cost and compliance from a technical perspective? What's really interesting to me is that heretofore, AI has lived pretty discretely within the data domain. Take plant maintenance, for example.

Speaker 2:

When we think about reliability engineering in a plant maintenance domain, you could break that down into vibration analysis and predictive analytics around failure modes and degradation. You're looking at the machine itself in the engineering domain, and then you can look at it via statistical analysis from your plant maintenance systems, right? You could look at recurring failure modes, you could look at the codes within the maintenance system, and you could take an almost reporting-based approach on the basis of the information that was captured. Or you could take an analytical approach based on the real-time data coming from the assets themselves: temperature, pressure, those sorts of things. But that lived within the data domain, right?

Speaker 2:

The business process was to produce the product; your applications were there to maintain the plant, et cetera. Now bring in text-based language models, and suddenly you've got a vertical slice that cuts across the business, data, application, and technology domains. In my experience, at least, that's relatively rare. Mobile kind of did that in a couple of different areas, but not really the way we thought it would, with location-sensitive retail, for example, which would have had a different business process. Only recently, only 20 years in, are we kind of doing the QR code thing at restaurants, and it took a pandemic, almost, to make that happen, because we weren't touching stuff.

Speaker 3:

I'm smiling. I spent time in the e-commerce and mobile crossover period, with the idea of real-time marketing as you walk down an aisle. Whatever happened to that?

Speaker 2:

And yet there were a thousand pundits on stage saying this is the future, and the closest we get to it is the latest Blade Runner, right, with the giant hologram interacting with our hero. This is different. It is in production now; it's impacting and delivering business value now. And I think, from an architecture perspective, we really need to figure out how to embrace and accelerate while minimizing risk. That's our job. We're here to help our companies innovate. We're here to help our companies win and succeed.

Speaker 2:

Let me just give you a quick list of use cases from SnapLogic that we are implementing here. The number one, which we already have in production, was our order form reconciliation, between our Salesforce world and the actual order forms themselves. In production, it has delivered 2% higher yield on our revenue recovery, 90% less manual review time, and a 30% faster monthly close. Hard, in-the-bank value to our business, delivered today. We're also in the process of standardizing our job leveling matrix, typically work for which you would have had to pull managers and employees away from their jobs to talk about: how do we figure out what are internally comparable types of work? How do you think about the compensable components?

Speaker 2:

Of that work: span of responsibility, technical and non-technical capability requirements. Very hard but crucial work, because you need to help people understand how they're going to develop their careers, and what that looks like laterally. We have already implemented and rolled out a new workload that runs every quarter to identify unique customer contract clauses and give the account team a heads-up, 90 days before the renewal date, that their customers have unique contract clauses. Think about it in the context of private equity, with contract clause management; those sorts of things are everywhere, right? But we're implementing it today. Feature request grooming, with enhancement requests from customers, feedback from analysts, and internal bugs supporting that. Marketing: search engine optimization, generated content.
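The quarterly contract-clause workload described above can be sketched as a simple batch job. This is a hypothetical simplification: in practice the clause comparison would be done by a language model against a standard clause library, but here it is reduced to set membership so the control flow (find non-standard clauses, check the 90-day renewal window, emit alerts) is visible.

```python
# Sketch of the quarterly contract-clause heads-up: flag accounts whose
# contracts contain non-standard clauses and renew within 90 days.
# STANDARD_CLAUSES and the contract records are illustrative placeholders.
from datetime import date, timedelta

STANDARD_CLAUSES = {"12 month term", "net 30 payment", "standard liability cap"}

def renewal_alerts(contracts, today, window_days=90):
    """contracts: list of dicts with 'account', 'renewal', 'clauses'."""
    alerts = []
    cutoff = today + timedelta(days=window_days)
    for c in contracts:
        unique = set(c["clauses"]) - STANDARD_CLAUSES   # non-standard clauses
        if unique and today <= c["renewal"] <= cutoff:  # renewing inside window
            alerts.append((c["account"], sorted(unique)))
    return alerts

contracts = [
    {"account": "Acme", "renewal": date(2024, 3, 1),
     "clauses": ["12 month term", "bespoke uptime guarantee"]},
    {"account": "Globex", "renewal": date(2024, 9, 1),
     "clauses": ["12 month term", "net 30 payment"]},
]
alerts = renewal_alerts(contracts, today=date(2024, 1, 15))
```

Only the account with a non-standard clause and a renewal inside the window is flagged, which is the heads-up the account team receives.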

Speaker 2:

Now we have a process whereby we're taking and scanning through sales call logs to automate the fill-in of MedPick information in our opportunity management, and I've only gone through the top eight. I've got 17 in front of me and another 20 below that, and we're a software company, but I just spun through product development use case, finance use case, hr use case, support use case, legal use case, and that all those were led by the people in those domains and not by Greg Benson, our chief scientist, coming down from his ivory tower with the tablets of the law.

Speaker 2:

Exactly right. Pair programming between a subject matter expert in the domain and a technical expert who could implement it, all of it done with zero code. Just let that soak in for a second.

Speaker 2:

Business value, business value, business value, business value, at high velocity, with low code. So this is the world that is possible. And yet, when we go out and look into most IT shops, I'd say at best there's somebody in the corner, whether it's an intern or somebody who has a bit of spare time on their hands, you know, fooling around, seeing, to your point, what are the things that are technically addressable internally. The theme of my talk is how to build and accelerate the future.

Speaker 2:

Of course, I use the oft-quoted William Gibson line: the future is already here, it's just not evenly distributed. But over the course of my 26 years now in industry, that quote has never felt as true as it does today, because of the just vast gaps between the level of productive application of this technology versus the sort of speculative rumination of, yeah, we'll get there at some point. At this point, I think we are there. I think next year will be the year that we see major industry disruption, where people who are adopting this technology and putting it to work are able to deliver goods and services at a dramatically different cost basis, and are able to create new products and services using this technology. It's happening. And, you know, to come back around to the beginning of the talk: with AgentCreator, what we've delivered is the ability to set goals and constraints and have the business processes define their own control flow.
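As an illustration of the goals-and-constraints pattern Jeremiah describes, here is a minimal sketch in Python. All of the names and structure below are hypothetical, invented for this example; this is not SnapLogic's AgentCreator API, and a real agent would use an LLM planner where this toy simply tries each available tool.

```python
# Hypothetical sketch: the caller declares a goal and constraints,
# and the agent chooses which tool to run next until the goal is met,
# rather than the developer hard-coding the control flow.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    goal: Callable[[dict], bool]               # predicate: is the goal met?
    constraints: list[Callable[[dict], bool]]  # must hold after every step
    tools: dict[str, Callable[[dict], dict]]   # available actions
    max_steps: int = 10
    trace: list[str] = field(default_factory=list)

    def run(self, state: dict) -> dict:
        for _ in range(self.max_steps):
            if self.goal(state):
                return state
            # A real agent would ask an LLM to pick the next tool; here we
            # try each tool and keep the first result that satisfies all
            # constraints, as a stand-in for the planner.
            for name, tool in self.tools.items():
                candidate = tool(dict(state))
                if all(check(candidate) for check in self.constraints):
                    state = candidate
                    self.trace.append(name)
                    break
            else:
                raise RuntimeError("no tool satisfies the constraints")
        return state

# Toy use case loosely modeled on the order reconciliation example:
# drive the count of unmatched orders to zero without going negative.
agent = Agent(
    goal=lambda s: s["unmatched"] == 0,
    constraints=[lambda s: s["unmatched"] >= 0],
    tools={"match_invoice": lambda s: {**s, "unmatched": s["unmatched"] - 1}},
)
result = agent.run({"unmatched": 3})
print(result["unmatched"], agent.trace)
```

The point of the pattern is that adding a new tool or tightening a constraint changes the agent's behavior without anyone rewriting the control flow, which is what "the business processes define their own control flow" amounts to in practice.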

Speaker 1:

Wow. So, what a call to action. We like giving our customers these sorts of unfair advantages. If you weren't at one of the Integrate events, there will be video of some sessions, not all; we deliberately try to keep something special for the people who come to the in-person events and enjoy the hallway track and all of that energy. These will also be in the show notes, as well as on the Integration Nation community page, together with all of the other Enterprise Architecture resources. But for now, thank you, Jeremiah. That's been incredibly illuminating for me, personally; I'm going to have to re-listen to this when I edit the transcript, with Google at hand, to go look up a whole bunch of the other ideas that we've thrown off. And thank you, Guy, for joining us also.

Speaker 2:

Thank you. Thank you both, thank you for having me. I'm a loyal listener to the show, and I learn a lot from it. And for those of you who do listen to this and want to pursue any of these ideas or dig in more deeply, you can find us on Integration Nation, our community, or reach out directly. This is a time for people who want to make the leap into this advantaged future; we're definitely creating a movement. It's a coalition, and I would love to talk to anybody who wants to dig in more deeply, or share the work they're doing, or their frustrations or failures. You know, this is a messy time, and the more we help each other, the better the community can succeed together.

Speaker 1:

Fantastic, thank you. As is tradition: like and subscribe, and share with your friends. There is a transcript, which I generate, naturally, for those who prefer to read along, if you find that a more congenial medium. But until then, we'll see you next time.