


Tim O'Reilly


Freebase Will Prove Addictive

Danny Hillis' latest venture, Metaweb, is about to unveil its first product, the aptly named Freebase, tomorrow. While Freebase is still VERY alpha, with much of the basic functionality barely working, the idea is HUGE. In many ways, Freebase is the bridge between the bottom-up vision of Web 2.0 collective intelligence and the more structured world of the semantic web.

Early visitors to the site might be inclined to dismiss it as an early candidate for the TechCrunch deadpool. After all, is there really room for what at first glance appears to be a bastard child of Wikipedia and the Open Directory Project, another site that purports to collect and organize all the world's information in one place?

Freebase homepage banner

But once you understand a bit about what Metaweb is doing, you realize just how remarkable it is. Metaweb has slurped in the contents of several of the web's freely accessible databases, including much of Wikipedia and the song tracks from MusicBrainz. It then turns its users loose not just on adding more data items but on making connections between them, by filling out meta tags that categorize or otherwise connect the data items, using a typology that can be extended by users, wiki-style.

So, for example, I search for O'Reilly Media, and come to the following page:

O'Reilly Media page in Freebase

As you can see, Freebase pulls in the Wikipedia entry if available, and there's a user-supplied photo of O'Reilly's building in Sebastopol. But most importantly, O'Reilly Media is identified by its type: Company. And with that type designation comes a whole lot of structure, because Metaweb has defined a whole set of additional data that is typically associated with a company:

the company data type

Now, when I select the link "show empty fields", I see a new version of the entry, which prompts me to enter various structured information that I might know:

O'Reilly Media entry with empty fields shown

Now, I can fill in various facts that I know about O'Reilly Media. For example, I've added the fact that O'Reilly was founded in Newton, MA, is currently headquartered in Sebastopol, California, and has some subsidiary companies.

O'Reilly Media freebase page, updated

And here's where the magic happens. When I go to the "companies" page, the new companies I've listed have entries of their own.

Metaweb companies list, showing new companies automatically added

Much as in Wikipedia, any entry listing an entity not already known creates a new page. But these entries are structured -- if I add a person, it is of type Person. If I add a location, it is of type Location. If I add a company, it is of type Company. And each of these types comes with certain relationships, which allows other entries to be updated automatically.
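To make the mechanics concrete, here is a toy sketch of such a typed entity graph. All class, type, and property names here are hypothetical illustrations, not Metaweb's actual schema or API:

```python
# Toy model of a typed entity graph with reciprocal properties.
# Names are illustrative only; Metaweb's real schema and API differ.

class Entity:
    def __init__(self, name, etype):
        self.name = name
        self.type = etype          # e.g. "company", "person", "location"
        self.props = {}            # property -> list of related entities

    def add(self, prop, other):
        self.props.setdefault(prop, []).append(other)

# Reciprocal property pairs defined by the (user-extensible) schema.
RECIPROCAL = {"subsidiaries": "parent_company", "ceo": "ceo_of"}

def assert_link(subject, prop, obj):
    """Adding one side of a typed link also updates the other entity."""
    subject.add(prop, obj)
    inverse = RECIPROCAL.get(prop)
    if inverse:
        obj.add(inverse, subject)

oreilly = Entity("O'Reilly Media", "company")
tim = Entity("Tim O'Reilly", "person")
assert_link(oreilly, "ceo", tim)

# The person entry was updated automatically:
print(tim.props["ceo_of"][0].name)   # O'Reilly Media
```

The design point is the reciprocal-property table: asserting one side of a typed link is enough for the system to maintain the other side, which is what makes Danny's entry pick up "composer" when someone edits an album track.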

When Danny first showed me this application a couple of months ago, there was a really powerful example of the way this ought to work. We were looking at the tracks for Brian Eno's album Bell Studies for the Clock of the Long Now. Danny noted that he was actually the composer of one of the tracks, 1st-14th January 07003, Hard Bells, Hillis Algorithm. Robert Cook, who was doing the demo, immediately added Danny as the composer. Then, when we went to Danny Hillis' page, we saw that he was now listed as a composer as well as a computer scientist.

Now that was a demo. In playing with the application since, I've found that some of these connections work in theory, but not yet in fact. For example, when I added myself as the CEO of O'Reilly Media, that should also have updated the entry for Tim O'Reilly (person), but it didn't. (I may have inadvertently created a new Tim O'Reilly entry rather than selecting the existing one.) In theory, the entry for Sebastopol, California should have changed to reflect a new company located there, but it didn't.

In addition, two of the companies I added as O'Reilly subsidiaries aren't really subsidiaries (a term that suggests wholly-owned companies) but affiliates: Safari Books Online is a joint venture with Pearson, and O'Reilly AlphaTech Ventures is a partnership that we are a member of. I don't have the privileges to create a new field in a company record, "Affiliated companies," but presumably, if Freebase takes off, Metaweb will promote community members to the level where they can make these kinds of edits.

Hopefully, this narrative gives you a sense of what Metaweb is reaching for: a Wikipedia-like system for building the semantic web. But unlike the W3C approach to the semantic web, which starts with controlled ontologies, Metaweb adopts a folksonomy approach, in which people can add new categories (much like tags), in a messy sprawl of potentially overlapping assertions.

Now, the really powerful thing about this is that all these categories, these data types and the web of fields that define them, provide new hooks for applications that will be able to extract meaning from the data. That's what makes Metaweb a kind of semantic web application.
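For example, once entries carry types, an application can query them structurally rather than scraping text. A minimal sketch with made-up records and a hypothetical in-memory query function (Freebase's real query interface is a separate API, not shown here):

```python
# Hypothetical typed records of the kind described above.
records = [
    {"name": "O'Reilly Media", "type": "company",
     "headquarters": "Sebastopol, California"},
    {"name": "Sebastopol, California", "type": "location"},
    {"name": "Danny Hillis", "type": "person",
     "professions": ["computer scientist", "composer"]},
]

def query(records, **criteria):
    """Return records whose fields match every criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# A structural query: "all entities of type company".
companies = query(records, type="company")
print([r["name"] for r in companies])   # ["O'Reilly Media"]
```

The point is that "Company" is a hook a program can grab, where a Wikipedia article is just prose it would have to parse.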

If Metaweb gets this right, this bottom up approach will build new connections between data, new categories and ways of thinking. It will likely be messy and contradictory for a while, but as I told John Markoff for the story on Metaweb that he was preparing for the New York Times tonight, they are building new synapses for the global brain.

And if you know anything about our own intelligence, you realize that our own pattern recognition works much the same way. We arrange the world in categories. Many of our categories are wrong, or in conflict -- hence our many squabbles and even wars -- but collectively, we make sense of things.

Its name is appropriate for many reasons. Yes, it is a free database; it is addictive; and its name is overloaded with multiple meanings, just like so many of the things we try to make sense of. But we have the ability to disambiguate those meanings and take them all in, with the overtones and conflicts actually giving additional meaning.

Metaweb still has a long way to go, but it seems to me that they are pointing the way to a fascinating new chapter in the evolution of Web 2.0.



7 TrackBacks


» Get Higher Baby... awww, FreeBASE! from Master of 500 Hats

according to Radar O'Reilly, Grandmaster Willie-Wil (aka Danny Hillis of Metaweb) is about to light up some serious freebase... maybe Google ( GoogleBase) folks should be worried? i've been waiting for the promise of this vision of the data-enhanced we... Read More

» Freebase: the Web 3.0 machine from Rough Type: Nicholas Carr's Blog

Artificial intelligence guru Danny Hillis has launched an early version of the first major Web 3.0 application. It's called Freebase, and its grandiose epistemological mission is right up there with those of Google and Wikipedia."We’re trying," Hillis ... Read More

» Freebase: free-wheelin' semantic web from David Van Couvering 's Blog

Tim O'Reilly blogs about freebase, an alpha product coming out of MetaWeb. If these guys are really doing what I think they're doing, this is a Big Thing. Read More

» What is Open? from Semantic Wave

Henry Story, Danny Ayers and Shelley Powers wrote some astute criticisms of Tim O'Reilly's write-up on freebase. Why do Tim O'Reilly, Cory Doctorow and others continue to mis-cast a means to agree on how to communicate as a centrally-controlled system?... Read More

» What would Guha do? from Bill de hÓra

New York Times: "The idea of a centralized database storing all of the world’s digital information is a fundamental shift away from today’s World Wide Web, which is akin to a library of linked digital documents stored separately on millions... Read More

» Freebasing the Sematic Web from Neomeme

The latest buzz on the Interwebs is Freebase. Tim O’Reilly gives a pretty good demo of the application, and the New York Times gives a decent overview for the layperson (complete with a picture of one of the founders in a stylish Creative Commons... Read More

» Web 3.0 ? from Nodalities

Having read the title, many of you are doubtless scrabbling desperately for your keyboard or mouse, intent on moving rapidly away from this latest vacuous piece of opportunistic marketing fluff, but please stay your hand a moment as there... Read More

Comments: 45

Tim O'Reilly   [03.08.07 11:38 PM]

In response to my comment about some of the generated links not working the way I expected, Robert Cook wrote in email:

"That Sebastopol not showing companies is by design -- we could set it up to show everything in it, but the UI would become unusable very quickly. The links are there, but they are not 'reciprocated' in the type system.

Likewise for person, although very quickly we'll have a way of showing all jobs a person has had in a controlled way."

Makes sense. Not all links should be visible. And many of them *could* be made visible later when it becomes clear how to do so. (For example, I imagine that city locations will eventually have a listing of associated companies as I expected -- it's kind of the "chamber of commerce" function. It's just not there in the UI yet.)

dave mcclure   [03.08.07 11:45 PM]

fascinating, absolutely fascinating.

methinks the googlebase folks should be a little worried tomorrow morning...

thanks for the post tim ;)

Kris Tuttle   [03.09.07 12:10 AM]

Finally! I remember a simple program called Whatsit? that did this on a single computer back in 1979. I miss it and have never been able to find anything quite like it. The ability to do this using the Web certainly offers some profound opportunities.

Other than risk of failure one worries that if they succeed they might be quickly acquired and the technology perhaps not as exploited as it should be.

Eoin Dubsky   [03.09.07 01:07 AM]

I like it, but I'm a little confused -- isn't it very similar to how works? That website is down today, but if I recall correctly, it does all of the stuff you're talking about above in a familiar MediaWiki way.

Chris Pirillo   [03.09.07 02:54 AM]

Tim, this is a spammer's dream come true - what's being done (proactively) to curb the eventual onslaught?

Stefano Bertolo   [03.09.07 03:46 AM]

Five years ago we had a working implementation of this very idea at a company I used to work for (minus the robust backend needed to handle massive amounts of instances).

The secret sauce seems to be learning not to ask inane questions like "who is X's uncle" just because X is a Y and you have some other Ys in the system whose uncles are recorded.

Also interesting to note that there is similar work underway to adapt the same process of refining knowledge into formal knowledge in the context of wikis.

See for example:

When all of this will come to pass it will be a nice day.

Gwen Garrison   [03.09.07 05:03 AM]

I would like to try it. I go to the site and there's nothing but a help wanted ad. How can ordinary people add data, or is this just for friends like TimO and EstherD and JohnM? Maybe that's how they keep the spam out. Tim, Esther and John are going to be pretty busy, hope they pay them lots of money!

Richard Ziade   [03.09.07 05:57 AM]

How would this address the problem of faulty or inaccurate information? It looks like it would suffer the same ailments as Wikipedia, except now the questionable data is semantically connected.

Jonathan Bruce   [03.09.07 07:22 AM]

Tim - I've applied to participate in the beta, but I thought I'd ask my question here nonetheless. What is the public interface? RESTful, SOAP/web-service based, or a proprietary protocol?

Owen   [03.09.07 07:41 AM]

I think this is fascinating. I want to participate. I need an invitation code. I don't have one.

My interest is not flippant. This lies directly in the path of some research I am currently engaged in - research about constructing a memi which will be both a personal dataspace and a digital shadow.

How can I get an invitation code, or can't I?

Owen   [03.09.07 07:44 AM]

Oops, apologies: I assumed that my web site would appear in my comments as a link. Since it doesn't I should maybe point out that information about the memi is available at

Otherwise my previous comment is more gnomic than I intended :)

Thejesh GN   [03.09.07 08:00 AM]

Very good idea. But the data they collect is not open. Even to see or browse any data you need to log in. That's sad. I hope they will enable OpenID in the future.

Jean-Michel Decombe   [03.09.07 08:07 AM]

Very interesting. None of these ideas are new, but in a world that is ever more aware of its connectedness with each day that passes, the execution and exposition of such ideas (in a way that makes sense to the largest audience) is more important than ever. I am looking forward to playing with it to see if they are the ones who did it right. Questions would include: Where does the metadata reside, and who controls it? How distributed is the whole system? How does it seamlessly intertwine with the Web as we know it? And so forth. It looks exciting nevertheless, so congrats to them on their upcoming launch (always an exciting moment).

Kingsley Idehen   [03.09.07 08:34 AM]


Please see the Data Web (Web 3.0) project called DBpedia.

This project is about Open Data Access and a Flexible Data Model (RDF). It exposes Wikipedia as a true Database or Web Data Space.

Data needs to be unshackled, and this is what the next frontier (Web 3.0) is about: Meshing rather than Mashing, which is the only option when you don't have an Open Data Access principle used in conjunction with a flexible Data Model.

Also look at the Data Web based Start Pages in my blog posts re. Oscar Winners or Major League Baseball. It is all here and happening now :-)

Also look at:

  1. Hello Data Web - Take 3
  2. OpenID & Personal URIs

David Wood   [03.09.07 08:38 AM]

Is this the first 'folksonomodel'? A user-generated model for user-generated content. Very interesting...

Denny Vrandecic   [03.09.07 09:04 AM]

As Stefano points out, this seems similar to our own approach towards a Semantic Wikipedia, but it goes even further by pulling in data from even more sources. I am looking forward to trying it out. In the meantime, you may want to check out

our demo semantic wiki.

On a different note: "But unlike the W3C approach to the semantic web, which starts with controlled ontologies..." Please, correct this error. As you can see in the Semantic MediaWiki project, we do *not* have a controlled ontology. Anyone can extend and change it. Saying OWL and RDF provide a controlled ontology is like claiming XML provides you with a controlled set of tags.

Henry Story   [03.09.07 09:09 AM]

Great to see new companies growing in this area!
As I mention in my comment on your article, the Semantic Web was designed to be grown in this way. The idea that it is a top-down thing is a confusion that obscures the originality behind the design of the Semantic Web, as I argued in "UFOs seen growing on the web."

Tim O'Reilly   [03.09.07 09:22 AM]

Denny, I'm sorry if I've misrepresented the Semantic web. But it does seem to me that there's a lot more focus on creating controlled vocabularies. I'm not talking about at the level of OWL and RDF, but what people are trying to do with them, the ontologies they are building with these tools.

Correct me if I'm wrong, but I haven't heard about other initiatives that allow wiki-style editing of the ontology itself, where anyone can modify an existing ontology. (Freebase isn't completely there, but clearly, that's a big part of their plan.)

Robert Cook   [03.09.07 10:37 AM]

These are great observations and critiques.

We're doing a limited Alpha right now because, as Tim rightly points out, there is still a lot of work to do. People who are initially invited will be able to invite others starting next week (and so on, gmail-style). Eventually we will have full public read access.

Also - We think the Semantic Wikipedia is great. We'd love to find a way to collaborate.

Denny Vrandecic   [03.09.07 10:43 AM]

Tim, that's exactly what our wiki is about. You are supposed to change the vocabulary in a wiki-way, just as well as the data itself. Need a new relation? Invent it. Figured out it's wrong? Rename. Want a new category of things? Make it.

But I also agree with you that a lot of Semantic Web research is working on high brow ontology engineering. But there is quite a movement inside the Semantic Web community towards bridging towards the Web 2.0 world. Actually at the WWW conference, I'll have a paper on this topic, The Two Cultures -- I hope C.P. Snow will forgive me. You're invited to come by and discuss this.

Semantic Web and Web 2.0 -- hey, it's one web after all.

Ryan Elisei   [03.09.07 10:49 AM]

The organizational aspects of the Freebase initiative seem very useful...

HOWEVER, by using a human to form concrete, atomic "types" (e.g. 'Company') and "connections", it cannot, by definition, be intelligent. It will therefore still have a human as the limiting factor of its learning ability. All the talk about computers sharing information without humans being involved is still just hearsay.

In high-level sentient learning, sensory input (and accordingly co-occurrence) are what form "connections" between different ideas. This is also aided by multiple-sensory-validation (or co-occurrence between multiple senses). As pointed out above, these ideas cannot be atomic or arbitrarily specified (as thoughtfully specified as they may be) by humans. Now tell me that they combine their APIs with Jeff Hawkins' HTMs and have a reptilian center firing mandatory sensory reaction, and THEN we'll see something like computers sharing information, but not before.

James Hendler   [03.09.07 11:18 AM]

I'm glad you're beginning to grok what the Semantic Web is about, but I must take issue with your claim that "unlike the W3C approach to the semantic web, which starts with controlled ontologies, Metaweb adopts a folksonomy approach, in which people can add new categories (much like tags), in a messy sprawl of potentially overlapping assertions." If you look at what I've been writing since 2001 (in the Semantic Web article in Scientific American, coauthored w/Tim Berners-Lee and Ora Lassila) through my recent posts on the "Dark side of the Semantic Web" - I, and many others, have not been arguing for controlled ontologies - rather, we designed the Semantic Web technologies, and especially OWL, to encourage linking and reuse. We do believe there will be some carefully controlled ontologies in high-value areas (such as the Cancer ontology which the National Cancer Institute maintains) but that much use would be by extension and linking to these - and if you'd ever attended one of my talks or Tim's on the Semantic Web, you'd see slides where we show exactly overlapping vocabularies that grow and link. In fact, I was the primary author of the FAQ for the OWL language, and you might look at the section on how OWL compares to earlier languages - we emphasize the WEB nature of RDF and OWL, and point out that open and extensible are two of the key design factors.

With due respect, I think you, and even more egregiously Clay Shirky, have been misrepresenting what the Semantic Web is, and critiquing based on that misunderstanding, not on the reality. Folks like Danny Hillis and Nova Spivack who were listening got it, and it is very gratifying to see MetaWeb, RadarNetworks, Joost and others embracing the Semantic Web and its technologies. We've said all along the Sem Web is open, Web friendly, and that openness and linking were the key, just as they are on the text Web. Now that the vision is coming into focus, please give us our due - we welcome this new and exciting work, but it is what we've been advocating for years, not a contradiction of it.
-Jim Hendler

Morten Frederiksen   [03.09.07 02:18 PM]

"If I add a location, they are of type location. If I add a company, they are of type company. And each of these things comes with certain relationships, and that allows other entries to be automatically updated."

That sounds like a "controlled ontology", not exactly a folksonomy?

You Mon Tsang   [03.09.07 03:35 PM]

Hey Tim:

I've been working on a contributing piece for VentureBeat thinking about this type of thing. In my case, I was thinking about how Wikipedia can be the brains of a software app, in much the same lines that the LAMP stack is its heart.

Take a look:

Nono   [03.09.07 04:45 PM]

Wikipedia with a strictly templated structure. Big deal?

Jeff LaPorte   [03.09.07 04:50 PM]

Great post Tim.

I believe there are some intriguing connections between the Freebase model and Creative Commons licensing - together they are greater than the sum of their parts.

See my post at:

- Jeff LaPorte

bob phillips   [03.09.07 05:57 PM]

I've been tracking reports of bird flu in several different languages and building up a glossary and list of reference sources by hand, using mostly Google. Something like the Freebase tool would seem interesting for a group of people (several different flu forums/wikis) working on a shared project like the discovery and collection of information on a disease.

I sent my email address to them indicating interest. Next time you are in touch with them, encourage them to add a suggestions box to their rather spare web presence.

Patrick Tufts   [03.09.07 08:40 PM]

Tim, thanks for the writeup of Freebase. And for anyone reading this at SXSW, I'm at the conference through Mar 13. Track me down if you're interested in what we're doing, and want to help out.

Mark Szpakowski   [03.10.07 08:17 AM]

Nice to haves:

  • OpenID login (argh! Not another username/pw!)

  • Safari support

Re the taxonomy/folksonomies debate, in the so-called real world we do both. Topic/subject mapping technologies can provide resolution of those as needed - ie, here's a mapping of my personal contextual vocabulary to that one over there...

Dave Evans   [03.11.07 07:40 PM]

We batted this idea around 5-6 years ago: the God database. Kind of like a mashup of Wikipedia and Dun & Bradstreet, as a starting point. You need a core D&B-style database to get the company stuff going, and I agree with Chris, the spammer issue is going to be problematic. Perhaps you need authentication to add/vet data, but access is free for personal use or paid for companies and services.

I certainly hope the API is a dead-simple GUI, like Yahoo Pipes, which is the perfect interface for this in the early days.

Sean Ness   [03.12.07 01:38 PM]

FYI...Metaweb will have their first public unveiling at the March 21st STIRR event in Palo Alto -

Stewart Brand   [03.16.07 03:48 PM]

Thanks for the walk-through, Tim. As there becomes more to cruise, I'll hope you'll take us on some more strolls through the content and features.

Nil Burns   [04.02.07 02:53 AM]

Hello All,

I really hope to get a response from Tim or any of you who already use the Freebase service/software/tool (just choose :-)). At the end of the day, the one big question is: if I just wrote a document without any tags (or Freebase tags), will the data also be structured automatically? Or will Freebase be unable to handle it?

Another question (minor one): Will the freebase tags be a standard ?



Robert Cook   [04.02.07 01:20 PM]


Structure in Metaweb is explicitly added by people. Automatic extraction of meaning from natural language text is a very difficult problem that is being worked on by many groups in both academia and industry. Our hope is that they'll use our API to help with semi-automatically creating structure from documents.

Metaweb doesn't have 'tags' exactly. Instead there are 'topics' which have more meaning than simple words. See my blog posting on topics.

By providing an open API that anybody can use and data with an open license, we hope that topics can become a standard. Already some groups have expressed interest in using Metaweb topics for applications based on tagging.

Mark Birbeck   [04.04.07 02:58 AM]

I have to agree with Jim and others; the Semantic Web is already by definition distributed and sprawling--a key tenet is that "anyone can say anything about anything".

Let's say for example that the pharmaceutical industry decides to create a complex ontology to define chemicals; there's no point in counterposing that to 'folksonomies', and saying that it's further proof of a 'high-brow' semantic web disconnected from the rest of us. The point is we need both evolving ontologies and we need ones that are created by experts in a field.

And as well as the data, we also need the 'glue' that makes this data available in different ways. I believe one of the most important developments that is being worked on at the W3C is RDFa, a way of using a small number of attributes to add simple or complex metadata to XHTML documents. RDFa is being worked on jointly by people from the HTML and RDF communities. (See my An Introduction to RDFa, Bob DuCharme's Introducing RDFa and the RDFa Primer, for background.)

No disrespect to Tim but I think what often happens is that people look for the one single thing that is somehow going to change the landscape, when the reality is that technology develops as a combination of both evolutionary and revolutionary steps. The same idea may be revisited many times over the years before it finally becomes a reality. It's as true in mathematics, biology, physics, engineering, etc., as it is in software development.

So Freebase is great, and a nice idea. But it's certainly not going to 'bring about' the semantic web. For that we need a combination of many different things (all of which do seem to be progressing together quite nicely, though).




Mark Birbeck, formsPlayer |

standards. innovation.

Tim O'Reilly   [04.04.07 09:40 AM]

Mark --

I'm struggling a bit to express a difference that I see but can't entirely put my finger on. When I say "bottom up" and "folksonomy", I'm not referring just to who defines the semantics, and how distributed the process is, but to whether the semantic structure is implicit or explicit.

PageRank is a great example. No one said, "Ooh, let's get people to express the value of a link for search results." Site1 links to Site2 is a kind of natural triple, whose additional meaning was inferred, or overloaded, by Larry's pagerank insight.

It seems to me that Web 2.0 is based on this kind of inferential structure of meaning, whereas the semantic web as advertised is based on explicit structure. is a great example of how a Web 2.0 site can lead people to create more "natural triples" without meaning to. People are linked by their common bookmarks -- a new synapse is born.
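One way to picture this: each such relationship is a (subject, predicate, object) triple, and the "new synapse" is a derived triple computed from existing ones. A purely illustrative sketch, with invented names and data:

```python
# Representing explicit links as triples and inferring new ones.
# Purely illustrative; no real service works exactly this way.
from itertools import combinations

triples = [
    ("alice", "bookmarked", "example.org/semweb"),
    ("bob",   "bookmarked", "example.org/semweb"),
    ("carol", "bookmarked", "example.org/cooking"),
]

# Group bookmarkers by URL.
by_url = {}
for subj, pred, obj in triples:
    if pred == "bookmarked":
        by_url.setdefault(obj, set()).add(subj)

# Infer a "shares_interest_with" triple for users with a common bookmark.
inferred = [(a, "shares_interest_with", b)
            for users in by_url.values()
            for a, b in combinations(sorted(users), 2)]

print(inferred)   # [('alice', 'shares_interest_with', 'bob')]
```

Nobody typed the inferred triple in; it was overloaded onto links that users created for their own reasons, which is the PageRank pattern in miniature.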

Metaweb strikes me as an evolution because it's a kind of next gen hybrid between semweb and web 2.0. See my followup post Different Approaches to the Semantic Web.

Mark Birbeck   [04.04.07 02:22 PM]


Ok... I see what you mean now. In that sense Freebase may or may not be the missing link, but whether it is or not, I certainly agree with you that there's a ton of metadata out there that we could 'infer' and get for free, if we put our minds to it. After all, a 5-year-old child can tell that the people who visit our house for dinner should be in the 'friends' section of our address book. They could even work out that the people whose profiles appear on the same page as ours in the departmental intranet are probably colleagues... and so on.

I've been thinking about this kind of thing a lot over the last few years, and part of that work has been channelled into RDFa, which I mentioned before; this allows you to place metadata into XHTML pages, which means that we can start to do some of the things we're discussing here. For example, RDFa means that it's easy for a blogger to add small bits of information about themselves at exactly the point where they are referring to it anyway--in some article or in their profile. The tools are not there yet to make it completely easy, but the architecture is.

But another space I've been looking at concerns the 'emergent' or 'inferred' aspect that you are describing, and in particular I've been looking at the seemingly narrow space of the information we provide when we interact with software. For example, in GMail I can click an option to get GMail to always show images in any email that I receive from O'Reilly mailing-lists. Given what you have said so far, I'm sure you'll agree with me that this seemingly simple act--one mouse-click for most of us--tells us a great deal about the relationship between me and the person or company sending the email. (As does the even simpler act of not clicking the link.)

But the frustrating problem is that you can't get to that metadata. The only place that 'knows' that I trust O'Reilly mailing-lists is GMail, and it does nothing with it. (At least nothing that we know of. :)) At the moment it is not possible to make use of this information in some other piece of software, for some other use. As it happens, my company is working on a software platform that aims to allow us to 'capture' those kinds of pieces of information, storing them so that other 'applications' can make use of them, but as you can imagine it's a lot of work.

Anyway, I do now see what you are getting at in relation to Freebase and in your follow-up, and although I can't help leaping to defend the 'big picture' :) I certainly acknowledge that a crucial part of building a (the?) semantic web is to be able to harness the enormous number of 'facts' that emerge organically, as much as it is to harness the 'facts' from enormous data stores.




Mark Birbeck, formsPlayer |

standards. innovation.

Tim O'Reilly   [04.04.07 02:29 PM]

Yes! excellent!

Cecilia Abadie   [04.05.07 10:10 PM]

I like Tim's comment: "Metaweb strikes me as an evolution because it's a kind of next gen hybrid between semweb and web 2.0.".
The way I see the web's evolution, it went static web, dynamic web, social web, relational web (where we are right now), and semantic web (where we're going in the future). After that, I predict the end of the web as we know it, as it reaches the interface-level protocol limitations of the times to come.

Michael Lang   [04.11.07 06:33 PM]

Tim, there is a web site that is a wiki-based OWL editor, where anyone can create a community and develop an OWL-based knowledgebase. The community members edit the knowledgebase like in any wiki. This is a "structured" approach to developing the semantics for a space, but it is also bottom up.

As Hendler points out, since OWL and RDF are naturally distributed technologies, the community-based knowledgebases can be mashed up to create larger knowledgebases and search spaces.


STEVE OKEEFE   [04.24.07 10:03 AM]


We visited this post today on our daily "coolhunt" program where we look for people who are using Collaborative Innovation Networks (COINs) to spot or develop new trends.

Our coolhunters made the following observations:

- We discussed Freebase, MetaWeb, and Danny Hillis' work in general.

- We questioned whether the centralization of knowledge would prove as good an organizing method as the decentralization of knowledge.

You can see the full log for today's coolhunt at the Swarm Creativity Blog and add your own comments if you like.

Thanks for providing such interesting food for thought.

Steve O'Keefe, Coolhunt Moderator
Scott Cooper
Peter Gloor

Anjan Bacchu   [04.27.07 04:36 PM]

Robert Cook,

can you send me an invite -- I'm at ANJAN.DEV AT NOSPAM gmail DOT COM.

Thank you,


Mark   [05.10.07 12:53 AM]

I have a Freebase account now and am really excited about its potential. The debate about ownership of data is certainly going to heat up with services like this one, as people contribute data that might be derived from other sources. A friend of mine operates a vertical search engine in property and if they scrape data from estate agents to submit it to Freebase, that will raise some very interesting questions.

vaspers the grate aka steven e. streight   [05.28.07 07:53 AM]

Am experimenting with Spock, folkd,, Twitter, Jaiku, and Freebase.

What I see in Freebase is a database/folksonomy driven Web 3.0 app.

Multi channel messaging, by geeks or companies, is now mandatory.

It's mediocre slacker to just have a web site, or just have a blog.

vaspers the grate aka steven e. streight   [05.28.07 07:55 AM]

er...looks like you need a spam filter.

I recommend Comment Moderation w/delayed posting, and email notification of new comments.

Hire an intern to keep your comments free from spam and trolls.

