Previously, I outlined my general understanding of FBS’s Spark Platform and the legal framework it creates. Now, as promised, I’m considering the technology framework. I should warn all readers, however, that I’m a lawyer, not a technologist, so I’m counting very heavily on tech-savvy folks to correct me in the comments where I go astray. It seems to me the big questions are (a) how data gets into Spark; (b) how it gets out for appropriate uses.
As I understand it, MLSs taking part in the Spark Platform will provide RETS feeds of listings data to FBS. (An exception may be the many participating MLSs who are already FBS customers; I expect there might be more efficient back-end approaches for FBS to copy and synchronize their listing data onto the Spark Platform.) The Spark Platform will, to the extent possible, map listings data from the MLSs’ native formats onto RESO’s data dictionary, making it as standard as it can be on a national level. The RESO data dictionary is a fairly new piece of standardization. (My colleague Mitch Skinner—@MitchellSkinner—has been working with RESO on the intellectual property framework and license agreements for the newly launched data dictionary this year.) The RESO dictionary is not meant, though, to cover every data field that might be important in a given MLS. I asked FBS CEO Michael Wurzer how Spark will handle local “oddities”—e.g., “dune grass” in western Washington State? He responded:
These will be available through the API as custom fields. As described here, data standardization through the platform is a process, which will improve over time as more applications are developed and need more standardized data and the dictionary broadens and improves.
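To make the mapping process concrete, here is a minimal sketch of how native MLS fields might be normalized onto RESO dictionary names, with unmapped local fields passed through as custom fields. All field names here are my own illustration—they are not taken from any actual MLS feed or from the Spark API documentation.

```python
# Hypothetical sketch: mapping an MLS's native field names onto
# RESO Data Dictionary names, with unmapped fields passed through
# as custom fields. Field names are illustrative only.

RESO_FIELD_MAP = {
    "LP": "ListPrice",       # native "LP" -> RESO "ListPrice"
    "SQFT": "LivingArea",
    "BR": "BedroomsTotal",
}

def normalize_listing(native_listing):
    """Split a native listing into standardized and custom fields."""
    standard, custom = {}, {}
    for field, value in native_listing.items():
        if field in RESO_FIELD_MAP:
            standard[RESO_FIELD_MAP[field]] = value
        else:
            custom[field] = value  # e.g. a local "DuneGrass" field
    return {"StandardFields": standard, "CustomFields": custom}

listing = {"LP": 350000, "BR": 3, "DuneGrass": "Yes"}
result = normalize_listing(listing)
```

The point of the split is the one Michael describes: standardized fields become useful to every application immediately, while the custom bucket preserves local data until the dictionary broadens to cover it.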
But Spark seeks to go further than that; it would also extract and maintain contact information provided by agents. Thus, if FBS is your MLS vendor (FlexMLS system) and you put buyer prospects into the MLS system, I anticipate contact info for those prospects would be usable to you in apps you obtain through the Spark Platform. I also understand that FBS’s goal is that the searches you save in MLS would be available in these other apps, too. I wondered whether contact information and saved searches collected by an agent and put into one Spark-enabled app will be available to the agent in other Spark-enabled apps and even through the agent’s MLS. I asked FBS to confirm/elaborate on these matters, and here’s Michael’s response:
[T]his is the goal, but, importantly, standardization of searches is a process that’s going to take time similar to the standardization of fields. Some searches may be converted to the new standard fields and search format but others may not be able to be converted. Most importantly, however, new searches created by applications using the API, such as Flexmls Mobile, will be in a standard format and available to all applications through the API. Ending the days of user data being locked in proprietary formats that cannot be accessed by other applications is a main goal of the Platform.
There is no standard comparable to RETS or the RESO data dictionary yet for these other types of data, though our friend Matt Cohen has long been urging the development of such standards for a variety of reasons.
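To illustrate what a portable saved search might look like, here is a minimal sketch: a search stored as plain, structured data that any Spark-enabled app could read back and reconstruct. The structure and field names are my own invention for illustration; the actual Spark format may well differ.

```python
# Hypothetical sketch of a saved search stored in a standard,
# portable format rather than a vendor's proprietary one.
# Structure and names are invented for illustration.

import json

saved_search = {
    "Name": "Waterfront under 500k",
    "Filter": {
        "PropertyType": "Residential",
        "ListPrice": {"max": 500000},
        "WaterfrontYN": True,
    },
}

serialized = json.dumps(saved_search)  # portable representation
restored = json.loads(serialized)      # any app can read it back
```

The design point is simply that once the search lives in a shared, documented format rather than one vendor’s database schema, it stops being locked to the application that created it.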
Another kind of data comes from the MLS. The MLS indicates on the Spark Platform what data fields and statuses are confidential (i.e., cannot be displayed on the Internet), what data fields are required on IDX displays, what data fields are permitted in VOWs but not permitted in IDX, etc. This “metadata” is not data about listings and individuals, but data about how that data can be used. Incorporating these business rules into the Spark Platform is a critical step to create efficiency in development of products based on data from many different MLSs.
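A rough sketch of how such business rules might be represented and enforced may help. The rule names and fields below are invented; I don’t know how Spark actually encodes its metadata internally.

```python
# Hypothetical sketch of MLS-supplied "metadata" (business rules)
# and how a platform might apply it before handing data to an app.
# Rule and field names are invented for illustration.

FIELD_RULES = {
    "ListPrice":    {"idx": True,  "vow": True},
    "SellingPrice": {"idx": False, "vow": True},   # VOW-only field
    "OwnerName":    {"idx": False, "vow": False},  # confidential
}

def filter_for_display(listing, display_type):
    """Return only the fields the MLS permits for this display type."""
    return {
        field: value
        for field, value in listing.items()
        if FIELD_RULES.get(field, {}).get(display_type, False)
    }

listing = {"ListPrice": 350000, "SellingPrice": 340000, "OwnerName": "J. Doe"}
idx_view = filter_for_display(listing, "idx")  # confidential and VOW-only fields drop out
```

Because the rules travel with the data, a developer building against many MLSs doesn’t have to hand-code each MLS’s display policies—which is exactly the efficiency the paragraph above describes.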
As for getting data out of Spark, that requires use of an API, or application programming interface. An API is basically a framework for two computer programs (or a database and a program, or two databases… you get the idea) to talk to each other. The API provides all the data and metadata I described above in one consistent fashion for all the MLSs involved in Spark to each product or service provider (PoSP)/developer in the Spark network. The data a PoSP gets will depend on what kind of application it is offering. (An IDX site gets less data than a VOW service, for example.) Documentation for the API is available now online; I confess that I don’t understand most of it, and I’m counting on more technical readers of this blog to comment on its sufficiency and practical utility.
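For readers who, like me, find API documentation opaque, here is a very rough sketch of what “talking to an API” amounts to: a developer’s program assembles a web request—an address, some search criteria, and a credential proving who is asking—and the platform answers with data. The endpoint URL, header names, and filter syntax below are placeholders of my own making, not the documented Spark API.

```python
# Hypothetical sketch of assembling a REST-style listings request.
# The URL, headers, and filter syntax are placeholders, not the
# actual Spark API. The request is built here but not sent.

from urllib.parse import urlencode

def build_listings_request(access_token, filter_expr):
    """Assemble the URL and headers for a listings query."""
    base = "https://api.example-spark-platform.com/v1/listings"
    query = urlencode({"_filter": filter_expr})
    headers = {"Authorization": "Bearer " + access_token}
    return base + "?" + query, headers

url, headers = build_listings_request("TOKEN123", "ListPrice lt 500000")
```

The credential (the token) is how the platform knows what kind of application is asking, and therefore—per the business rules above—how much data to return.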
Note that this technology framework also implements the legal framework I discussed in the last post. Spark is not the first to do this. RETS IQ has implemented an automated contracting framework for use with its custom RETS server projects in at least one of our client MLSs. (Mitch helped develop the contracts for that, too.)
I mentioned there may be some difference between Spark’s integration with FBS systems and its integration with non-FBS systems. Brian Boero also asked whether MLS vendors other than FBS would take part. As I understand it, FBS is working with several MLSs and their vendors to determine the best way to integrate the Spark Store and purchased applications into the MLS system. The idea is that adoption will be improved by deeper integration into the MLS system, and so developing standards for this integration is critical as well. It seems to me that FBS is very open to working with the other system vendors on this; I don’t know how much they will want to cooperate.
Well, that’s a rudimentary understanding of the technology framework. Again, I hope that folks who understand the technology better will speak up with comments and clarifications. I know Mike Wurzer from FBS sometimes reads this blog, so if you direct technical questions his way in the comments, you can probably expect a pretty prompt answer.
Next time, we’ll consider what I think are the significant obstacles to the adoption of the Spark Platform (or similar concepts). See you then!