

Oct 18

Tim O'Reilly


Web2Summit: Radar Networks Unwinds

As part of the Semantic Edge panel tomorrow at the Web 2.0 Summit, Nova Spivack of Radar Networks plans to unveil the first application built on their semantic web platform, twine, a new kind of personal and group information manager. I've only seen a demo, and haven't had a chance to play with it hands-on or load in my own documents, but if it delivers what Nova promises, it could be revolutionary.

Underlying twine is Radar's semantic engine, trained to do what is called entity extraction from documents. Put in plain language, the semantic engine auto-tags each document, turning each entity into what looks like a web link as well as a tag in the sidebar. Type a note in twine, and it picks out all of the people, places, companies, books, and other types of information contained in the note, separating them out by type. For example, here is an email dropped into twine:

entity extraction in twine

(Note: the screenshots shown in this entry were taken during a demo of a twine alpha delivered over the web; ignore the surrounding frame when looking at the images. In addition, some features may have changed since the demo.)
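Twine's actual extraction engine is proprietary, but the auto-tagging behavior described above can be sketched in miniature with a simple gazetteer lookup. The names and types below are invented for illustration; a real engine would use statistical NLP rather than a hand-built dictionary.

```python
import re

# Toy gazetteer mapping surface strings to entity types. This
# dictionary-lookup approach is purely illustrative of what
# "entity extraction" produces, not how Twine implements it.
GAZETTEER = {
    "Nova Spivack": "person",
    "Radar Networks": "company",
    "San Francisco": "place",
}

def extract_entities(text):
    """Return (entity, type) pairs found in the text, longest names first."""
    found = []
    for name, etype in sorted(GAZETTEER.items(), key=lambda kv: -len(kv[0])):
        if re.search(re.escape(name), text):
            found.append((name, etype))
    return found

note = "Met Nova Spivack of Radar Networks in San Francisco."
print(extract_entities(note))
```

Each extracted pair would then surface both as an inline link and as a tag in the sidebar, grouped by type.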

OK. So what, you say? The magic doesn't happen until you -- or a group of people -- have collected a large set of documents. Now, you can use the tags associated with any given document to pivot through everything else your collection, or twine, contains about that tag.

So, for example, our team here at the O'Reilly Radar might create a twine on a topic we're interested in, say sensors, and any time we met with a company doing interesting work with sensors, we'd add our notes into that twine, along with associated email, address book entries, images, web links, and so on. This pile of documents would have relevant tags extracted, which would then allow us to navigate through it. In addition, twine pulls in additional information: give it a bookmark, and it brings in a thumbnail of the web site; give it the name of a book, and it brings in a cover and associated metadata from Amazon; and so on.
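The pivoting described above amounts to an inverted index from tags to documents: pick any tag on one item and you can jump to everything else in the twine that carries it. A minimal sketch (the document names and tags are hypothetical):

```python
from collections import defaultdict

# Build an inverted index from tags to the documents that carry them,
# then "pivot": given a tag on one document, find every related item.
docs = {
    "meeting-notes.txt": {"sensors", "Acme Corp"},
    "intro-email.eml": {"sensors", "Nova Spivack"},
    "reading-list.txt": {"Nova Spivack"},
}

index = defaultdict(set)
for doc, tags in docs.items():
    for tag in tags:
        index[tag].add(doc)

def pivot(tag):
    """All documents in the twine that share this tag."""
    return sorted(index[tag])

print(pivot("sensors"))
```

Because the tags are extracted automatically, the index comes for free as documents are added.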

The key point is that because each entity in any of the documents becomes a meaningful tag, that extracted meaning becomes a semantic layer tying all of the documents together. What's more, twine has its own built-in semantic taxonomy, based on concepts mined from Wikipedia, and Nova claims it can make connections between documents using tags and concepts that are not actually in the documents themselves.

You can then explore the various data that's been added into the twine by the people sharing it. As Nova said in some notes he sent me by email, "It's like a social network for sharing, organizing and finding knowledge.... Twine helps people organize, share and find their information more intelligently. Twine learns from users and from folksonomies in order to help them better over time."


contents of a twine

As Nova mentioned, twine also has elements of social networking: you can share any item, or a whole twine (a group of related items) with other people. And there is the by-now-obligatory news feed:

Knowledge management is certainly a thorny problem. We all have vast collections of data, usually in various silos: our email, our bookmarks, our flickr photos, our address book. Navigating among related items is hard. And when you have a group of people working on a shared project, it becomes even harder. Who knows what? Where is it? This is the knot that Radar Networks hopes to untangle.

Here are some notes on how Twine works, from a briefing document Nova sent me:

1. Create Twines
o Create a Twine for yourself or for any group, team or community. You get a private personal one by default. It takes seconds to create one for any group.
o "A Twine" is a place for your knowledge. It's the next step beyond a file server, wiki, personal home page, or database.
o Twines can be private and personal, private for groups, or for public groups and communities.
o All roles and permissions can be controlled by the creator, and they support moderation.
2. Semantic Social Networking
o Users can form opt-in relationships with individuals and groups to permit communication and sharing in Twine. This is similar to Facebook, except that user-profiles and social relationships are semantically defined and extensible, and accessible via open-standards (RDF and SPARQL). We plan to enable users to extend these as well.
3. Bring in all your stuff
o Pull in all your information from anywhere into your Twine(s).
o You can pull information in live off the Web as you surf, or via import, or via APIs.
o Emails, bookmarks, RSS/Atom, documents, photos, videos, Amazon, contact records, data records, Twitter, Digg, other applications, etc.
o It's extensible and open -- the community can add support for new data sources, or even custom data sources.
o Once your information comes in, we turn it into semantic web content (objects of various kinds according to our ontology, and/or other ontologies that can be loaded in, or that users can create themselves).
o Twine will eventually synchronize with all your stuff, across all locations and devices. We're starting to work on that -- there's plenty to do there.
4. Author and Edit Rich Extensible Open Content
o Twine enables users to author information much like they do in wikis, blogs or databases. There are many types of information that can be added (people, contact records, notes, organizations, etc.)
o Twine supports collaborative editing, like a wiki, according to permissions, down to the object/page level.
o In the near future we will start enabling users to define their own types of content as well (like Freebase, but a different approach).
5. Enrichment and Learning
o Mining. Twine mines content to detect metadata and tags and other descriptive information that may already be present.
o Semantic Tagging. Twine runs NLP [Natural Language Processing] on the content to detect entities like people, places, organizations, products, events, concepts, etc. These become semantic tags. Tags in twine are semantic objects that link to other broader, narrower or related tags, and can also link to ontological definitions.
o User-Tagging. Users also participate by helping to tag the data -- but they don't have to do as much work. We find 80% of the tags for them automatically and suggest others.
o Auto-Classification. Twine automatically categorizes content, based on machine learning against Wikipedia. There are approximately 300,000 community-generated taxonomic categories in Wikipedia. This is an example of the next generation of collective intelligence -- software learning from the collective intelligence and behavior of millions of people.
o Learning. Twine is designed to learn from the community of users in aggregate. As they use Twine, the service begins to learn new concepts and tags, discover new connections and form new relationships automatically.
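The briefing's idea of tags as semantic objects that link to broader, narrower and related tags (item 5 above) is, at bottom, a small taxonomy graph. A minimal sketch, with invented category names:

```python
# Tags as objects linked to broader/narrower tags, per the briefing's
# description. The example taxonomy here is made up for illustration.
class Tag:
    def __init__(self, name):
        self.name = name
        self.broader = []   # links to more general tags
        self.narrower = []  # links to more specific tags

    def link_broader(self, other):
        self.broader.append(other)
        other.narrower.append(self)

    def ancestors(self):
        """Walk up the taxonomy, collecting every broader concept."""
        seen = []
        stack = list(self.broader)
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.append(t)
                stack.extend(t.broader)
        return seen

device = Tag("device")
sensor = Tag("sensor")
accelerometer = Tag("accelerometer")
sensor.link_broader(device)
accelerometer.link_broader(sensor)

print([t.name for t in accelerometer.ancestors()])
```

Walking these links is what would let Twine connect documents via concepts that never appear in the documents themselves.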

It's difficult to see, based on the demo, whether or not they've succeeded at this ambitious undertaking. I think a lot of this is still the vision, with many features not yet available in the alpha to be unveiled tomorrow.

But more than that, I'm going to withhold judgment till I can get my hands on the service. Until the system is populated with a lot of data -- far more than shows up in the demos -- we won't know whether we've spun a smooth twine, or a gnarly knot. But I'll look forward to trying. I'm seeing a number of startups trying to work this same problem. None has yet gone live. But I'm confident that eventually someone will make some headway, and I'll be excited if twine gets there first.

Oh, and one more thing. Nova says "Data that [you] put in Twine will be fully exportable. We don't want to lock anyone in. The freer we can make your data the better for everyone. The fact that Twine turns all your data into RDF means you can reuse it in anything in the future." Note the tense of that promise, though: "will be," not "is." Still, sounds like it will be well worth a close look.
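Turning your data into RDF means, at bottom, emitting subject-predicate-object triples that any other RDF tool can consume. A hand-rolled N-Triples sketch of exporting a contact record (the namespace and property URIs are invented; a real export would use an RDF library and published vocabularies):

```python
# Exporting a contact record as RDF triples in N-Triples syntax.
# The example.org URIs are placeholders, not Twine's actual vocabulary.
EX = "http://example.org/"

def triple(subject, predicate, obj):
    """Serialize one triple; URIs get angle brackets, literals get quotes."""
    o = f"<{obj}>" if obj.startswith("http") else f'"{obj}"'
    return f"<{subject}> <{predicate}> {o} ."

contact = {"name": "Nova Spivack", "org": "Radar Networks"}
lines = [
    triple(EX + "contact/1", EX + "vocab/name", contact["name"]),
    triple(EX + "contact/1", EX + "vocab/org", contact["org"]),
]
print("\n".join(lines))
```

The point of the format is exactly the portability Nova promises: triples serialized this way can be loaded into any RDF store.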

tags: Conference: Web 2.0 Summit, radar, semanticweb, twine, web2summit, web_2.0  | comments: 24



Search Engines Web   [10.19.07 01:28 AM]

Can high level concept search be incorporated into it?

Stefano Bertolo   [10.19.07 02:05 AM]

A large part of this effort will be establishing when two contributors are referring to the same entity ("Michael Jordan": the basketball player or the UC Berkeley professor?) and being able to refer to anonymous entities (the meeting you and I had three weeks ago; the location of a car accident; ...)

All of this will be greatly facilitated by systems that provide shareable entity identifiers.

Michael Sparks   [10.19.07 02:31 AM]

This sounds very similar to the work being done at the National Centre for Text Mining in the UK. (We're working with them on something similar with articles from BBC News.) Specifically, twine sounds somewhat similar to the Termine & Cheshire systems they've developed.

One shot example:

That's an example of the Termine system (personally, I find it pretty impressive by itself, let alone what happens when you allow it to discover clusters and redo searches based on that). The Cheshire Termine system gets much more useful when you have a large number of documents (i.e., many thousands). At the moment we're sending them as many news articles as we can to explore what this sort of technology can provide.

Vladislav Chernyshov   [10.19.07 07:04 AM]

Yahoo!!! Well done, Nova! Good luck to you and all the Radar Networks guys!

Joe Duck   [10.19.07 11:10 AM]

Tim -
... if it delivers what Nova promises, it could be revolutionary

OK, but it seems I've heard that before about a lot of new online efforts. If Powerset delivers, people had better sell their 600+ GOOG shares fast.

My first reaction to this was WOW! because I assumed "Radar Networks" and you (Tim O) were involved. But unfortunately this is not the case.

Your posts have historically been right on the mark about how social networking, search, and the global data maelstrom must become far more accessible, more open, and more integrated.

I'm also waiting to test Twine, but my quick reaction is that this is NOT the Web 3.0 they claim to be. I'd say Web 3.0 is where I don't have to do much of anything to make my stuff available to others and make it nicely organized. Also, the app must allow me to access anything you want me to see, and anything online that I want to see, and find the stuff I didn't know I wanted to see based on my past input to the system.

Web 4.0 will be simpler because by then we'll hopefully have integrated our brains with the systems so no written or verbal integrations will be needed. circa 2025?

richirich   [10.19.07 12:25 PM]

I heard that Twine was US DoD/IC government-funded technology? Do they have access to the DB? How can anyone trust Radar?

Jeffrey Carr   [10.19.07 04:50 PM]

I really don't see anything unique in what Twine has released so far. ClearForest, for example, has offered a Firefox add-on that does the same entity extraction for any web page that your Twine screenshot illustrates, and they had that available several months ago. It seems like Twine is just a mash-up of social networking with some Semantic Web components. I'm also curious about what it is that they think is patentable (Spivack mentioned that Twine is patent-pending at his blog); i.e., that prior art cannot be found for.

Tim O'Reilly   [10.19.07 09:43 PM]

Joe --

Good catch re the name. I should have made clear that there's absolutely no connection between Radar Networks and the O'Reilly Network.

And yes, in some ways, Twine looks like it's just 2.0 -- and that would be neat, but not revolutionary. But I do think that there's something going on in the extraction of a semantic layer. The semantic layer is always there, but it's latent. Tools like twine and powerset and freebase are making it explicit. And that will enable a new class of applications.

The video from our panel at Web 2.0 Summit should be up in a couple days.

Tim O'Reilly   [10.19.07 10:24 PM]

richirich --

I sent your comment to Nova, and he replied:

"The short answer is that several years ago we were part of SRI's CALO project, which was funded by DARPA - to invent next generation tools for knowledge workers. We were one group among 400 researchers in 30 institutions including many of the leading universities. We provided a triplestore and helped with the ontology. All the work we did was open-sourced under LGPL and anyone can see it and use it. It is available in the IRIS codebase at SRI - a smart personal information manager for the desktop. We were just a tiny part of that project and the codebase we use today is completely different and not related to that project. Whereas IRIS was about exploring a future semantic desktop, Twine is about the semantic web - it is a totally different approach. We were honored to be part of the CALO project - one of the most ambitious academic research initiatives in artificial intelligence in years - but our present work is totally different. In short we learned a lot, we were able to be a part of an interesting project, and it helped us understand the problem space better. But Twine is in no way funded by or affiliated with any government agency, and our position on privacy is very clear: we believe in the individual's right to privacy. There should be no doubt about this; if anyone is seriously concerned, feel free to contact me and we can discuss this further. Twine is a completely independent project."

Nova Spivack   [10.19.07 10:30 PM]

First, in answer to the question about any potential relationship between Radar and DoD: actually, Radar Networks was never involved with DoD. In fact, the actual history is that several years ago we were invited to be part of SRI's CALO project -- which was funded by DARPA -- to invent next generation tools for knowledge workers. We were one small group among approximately 400 researchers in around 30 institutions including many of the leading universities in the US. CALO is a "who's who in AI" -- involving many of the leading academics in the nation. Radar Networks' role was very small -- we provided a desktop disk-level RDF triplestore and helped with the ontology and user interface work. All the work we did was open-sourced under LGPL and anyone can see it and use it. You can see all the code here:

IRIS is a smart personal information manager for the desktop. Today IRIS may use only very little of the original codebase, and I'm not sure how much, if any, of our work survives in the codebase today. It has been many years since we looked at it. You are welcome to look at that codebase and decide for yourself. As it is LGPL, it is freely available and open to scrutiny.

In any case, we were just a tiny part of that project. We later raised venture funding from Paul Allen, and we began development of a completely different codebase. Instead of being oriented towards the PC desktop, like IRIS, we focused on building an "Internet-scale" server architecture and platform for the Semantic Web. The codebase we use today is completely different and not related to CALO or IRIS. Whereas IRIS was about exploring a future semantic desktop, Twine is about the semantic web - it is a totally different approach with completely different goals and constraints.

We are in no way affiliated with CALO or SRI, although we remain very good friends with our colleagues from that project -- including some of the best ontologists and AI thinkers in the world.

We were honored to be part of the CALO project - one of the most ambitious academic research initiatives in artificial intelligence in years - and it's wonderful that there is support for big ambitious breakthroughs from our government. After all many of the technologies we now take for granted as the Internet of today were originally funded by DARPA. Similarly, much of the core technologies of the Semantic Web stem from DARPA research, such as DAML + OIL.

But it should be extremely clear that Radar Networks' codebase and product is totally separate from that past research. Neither Radar Networks nor Twine is in any way funded by or affiliated with any government agency or program, and our position on privacy is very clear: we believe in the individual's right to privacy. We also believe in open standards, and open-source. All of our code has been implemented to be potentially open-sourced in the future, and while we haven't decided if that makes sense to do at this time, there is nothing in our code that we are ashamed of or that would compromise anyone's security or privacy deliberately. In fact many of the coders here at Radar Networks are hard-core open-source folks, like Peter Royal of the Apache Foundation, and Jim Wissner, our chief architect, who is extremely in favor of individual privacy. My own personal views on this subject are equally strong. I am a passionate believer in democracy and the rights and protections granted to individuals by the US Constitution. I am very concerned about the present trend towards eroding those rights, and as long as I have any say in the matter, I can promise that my company will do its utmost to protect privacy and the Constitution of the United States. Hopefully this has alleviated any doubts about this issue. If not, feel free to contact me and we can discuss this further until you are completely satisfied.

Nova Spivack   [10.19.07 10:38 PM]

Regarding the questions about what we have at Radar Networks that is "novel" and "innovative": there is a lot of new work and intellectual property here. Most of it is related to the Semantic Web platform, the way we do machine learning, the way we enable knowledge management, new ways to improve search results, new ways to do personalization, based on uses of the "semantic graph" we are building. We are not claiming to own the idea of natural language processing (NLP), nor have we patented that. Instead we apply a wide variety of methods including NLP, statistics, graph theory, machine learning, ontologies, in an innovative combination, along with human input and folksonomies. Ultimately we are about open standards, open APIs, and even open-sourcing. This will all be revealed over time. We have been working since 2001 on this. We have a very large amount of code and IP, and over time we will unfold this and release more and more of what we have been working on so that others can play with it. I hope that in the future, when the codebase has evolved further, we can even release an open-source stand-alone version. We've done some experiments around that and we have some interesting ideas. I can't promise that yet because we are not sure it is key to our business model to do it yet -- in fact it may just be simpler to focus on being a SaaS (software-as-a-service) business and open the platform via open APIs (which we have and are working on). That is a decision we will make in the future. Regardless of that, for now our mission is very clear: make Twine an amazing and delightful service for individuals and groups, and then enable outside developers and services to integrate with Twine in numerous ways. For this first phase our focus is really on learning and entering into a conversation with users and developers. I am sure there will be many wonderful discoveries and synchronicities along the way, and I guarantee that we cannot imagine them yet.
This is a great adventure and we are very excited to be on it, and I look forward to feedback from all of you as we explore this new frontier together.

Jeffrey   [10.20.07 12:29 PM]

Nova, thanks for your reply, although I'm still skeptical that your group of researchers has truly arrived at something that will survive a patent dispute in court. In all honesty, I'm dealing with the same issues in my own product development in the area of Intelligence and Security Informatics. We, too, have what we believe is a unique way of combining pre-existing as well as fresh research in computer linguistics, but I'm wondering if that's enough to secure a patent, and in fact, if there isn't a way to bypass the patent process altogether. If there were a Creative Commons type of security for our work instead of the USPTO, and if the road map for monetizing open source software were a little more clearly laid out, I'd jump on that in a minute. In the meantime, I'm watching your company unfold with interest, and am taking notes.

Nova Spivack   [10.20.07 01:17 PM]

Regarding our potential future patents, there is a lot of novel IP in our work. It's not a matter of merely combining existing ideas in a new way. We've invented a lot of new stuff, especially in the last 2 years, and even quite recently. But who knows what the USPTO will do. I wouldn't presume to predict that. I've seen a lot of patents granted that have absolutely no merit and don't deserve to exist. I've also seen denials of very novel ideas on completely misguided grounds, based on examiners not really understanding the domains well enough. There have also been many very legitimate, well-deserved patents. It's a bit of a chaotic system, it seems.

The patent office examiners are overloaded and the entire field of software patent law needs an overhaul. There are too many old patents lying around that nobody is using, making it hard to do anything new without risk of stepping on a landmine. At the same time, the standard for what is patentable is too fuzzy in some cases, and so people try to patent things that just don't deserve it. There are too many patents being filed and granted.

One solution to the problem would be to make all (past, present and future) software patents have a shorter lifetime -- say 4 years max. That would relieve the backlog on the PTO, and would also remove the barriers to progress and competition that result from dormant patent land-mines lying around. First of all, only serious players would bother to apply for a patent if they could only get 4 years of protection. Secondly, if one got a patent but failed to do anything with it within 4 years, then one would not be able to sit on it and just wait for someone else to sue in the future.

A four-year timeframe would be fair because if an invention is not commercialized and monetized by its inventor within 4 years from being granted a patent in the software space (a long time in our field), then it should be opened up to others. That would really reduce the number of lawsuits (spurious or otherwise), and enable the software industry to flourish.

Software is not like mechanical devices -- it evolves much faster. Our present patent system wasn't designed for software -- it just can't keep up with the pace of software innovation -- it was designed for an earlier, more physical-mechanical world. The friction between the pace of the software industry and the intellectual-property process we currently have for it is actually holding back our emerging new economy. It increases costs and risks for everyone, and reduces the spread and reuse of new ideas. Another area where I feel very concerned is genomics patents and patents around drugs -- these are for all intents and purposes very similar to software patents -- and it's very worrisome to see what is going on in that space. When fundamental operations on the building blocks of the human, plant and animal kingdoms become patentable "inventions," it just seems wrong.

In any case, regardless of my opinions above, I do believe that companies and individuals who invest time and money in inventing things -- software or otherwise -- should be granted some protection for a limited period of time so that they can have the first shot at monetizing their investment. That's only fair. Without this protection at least some amount of incentive and investment in early-stage ideas would dry up, and that would hurt innovation and progress just as much as having too much protection.

In any case, until the patent system for software changes (for everyone, equally), then I and my company have to work within the present system. I do believe that our intellectual property is novel, but time will tell. It will probably be many years before we'll know for sure.

Thomas Lord   [10.20.07 03:51 PM]


A unilateral sense of fairness towards inventors is absolutely not the purpose of the patent system. The idea of inalienable rights of authors and inventors was debated among the founders -- and rejected. In its place was put the public interest: exclusive rights for limited times are Congress' to dole out, but only in so far as everyone else wins as a result.

One of the ways that Congress has chosen to help everyone win is to legislate that patents can't be obtained for something "obvious". You can't use patents to prevent people from doing obvious things. In the most controversially conservative (pro-patent holder) court decisions, obviousness is defined as a very clear kind of pointing to the invention by prior teaching.

Twine looks beautiful and I hope you achieve some success as a business but on the question of patents, I'm sorry, but you've got nothing that isn't clearly in prior teaching (and for that matter, practice). This isn't the right forum and it wouldn't be fair to ask you to try the case for your patents on blogs but -- c'mon -- you're doing the very standard thing of logically imposing automatic mark-up based on NLP and you're presenting that data using the very common technique of fuzzy hypertext associating, presumably with hooks for users to go in and nail down things that the system can't puzzle out automagically. Everyone and their brother started building systems like this at about the same time and on the basis of prior teaching. You're crafting a business, not cashing in on patentable inventions.


Nova Spivack   [10.20.07 08:11 PM]

Thomas, with all due respect, you really have no direct information about what we are doing that is or is not novel, nor any knowledge of our platform or core technology. I agree with you, the basic concepts of NLP are prior art, and that is not something I would ever claim is novel. Fortunately, that is not what our IP is based on.

Thomas Lord   [10.21.07 01:14 PM]


There is no disrespect intended either way, I'm sure. And, really, from the public information, your app looks plausibly like a nice example of a nice new category. I think (on the basis of prior teaching) that this category is going to very rapidly evolve making it very hard to identify the winning lines of development today: you'll have to be agile!

Again, I think I should not try to provoke a detailed debate of your patent claims here, but I hope you'll consider solidifying (or discarding) your claims by finding a good forum to expose them to early-stage public challenge. A reasonable conversation could be had about them without jeopardizing your legitimate claims. Such a discussion could, if it quickly debunks the claims, save you money. Such a discussion could, if it supports the claims, help to support the price of your licenses.


Adam Lindemann   [10.21.07 08:11 PM]

Hello Tim.

I would very much like to show you Imindi which we have been developing for two years and now IS live. It is a "Thought engine" whose purpose is to help people to collect, connect and expand their thoughts and to tie these thoughts to relevant information.

Have a look at my thought on Radar Networks on Imindi and you will see what I mean.

I would be happy to walk you through it.

john   [10.22.07 07:38 AM]

Great discussion. I poked around the patent database a few months back when Radar Networks was in the NYTimes and found a patent that seems to make a lot of claims. One weird thing was they added their business plan into the patent?

Everything you wanted to know about Radar Networks, Inc.
It's all in the patent application, including the business model:
Article about Radar in the NYTimes, by John Markoff:
Entrepreneurs See a Web Guided by Common Sense

Based on the patent:
- hosted semantic blogging service with email accounts for distillation of content, all for $9.95
- also will be called,

This is Web 3.0 (?) email, blogging?


[0007] Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth herein.

[0008] In one aspect of the invention, a method of processing content created by a user utilizing a semantic, ontology-driven portal on a computer network is described. The semantic portal application provides the user with a content base, such as a semantic form or meta-form, for creating a semantic posting. The semantic portal utilizes a knowledge data structure, such as a taxonomy or ontology, in preparing a semantic posting based on the information provided by the user via the content base. The semantic portal application prepares a preview of a semantic posting for evaluation by the user. The semantic posting is then either modified by the user or accepted and posted by the user for external parties to view.

[0009] In another aspect of the invention, a method of sharing information on a semantic-based network is described. A semantic portal application intercepts an e-mail message via an outgoing mail server or an incoming mail server. It then creates a semantic object from the e-mail message by examining syntax of the e-mail message inline, for example by detecting brackets, colons, and certain keywords indicating data type. The semantic posting is then made viewable by the public.
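The inline syntax detection described in [0009] could be sketched as a simple pattern match over bracketed "key: value" annotations. The bracket-and-colon markup and the field names here are hypothetical, invented only to illustrate the kind of parsing the claim describes:

```python
import re

# Sketch of [0009]: derive a typed record from inline "[type: value]"
# markup in an e-mail body. The recognized keys are made up; the patent
# itself does not specify an exact syntax.
PATTERN = re.compile(r"\[(\w+):\s*([^\]]+)\]")

def email_to_object(body):
    """Collect [type: value] annotations into a field/value dict."""
    return {key.lower(): value.strip() for key, value in PATTERN.findall(body)}

body = "Meet at [place: Palo Alto] on [date: Oct 18] with [person: Nova]."
print(email_to_object(body))
```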

[0295] In addition to the above, Hotnode does automatic social networking: Building a social network by simply analyzing the recipients on messages, and linking these together into a representation of the social network that they are a result of. This is a secondary viral feature: Belonging to such a network makes it more likely that a member will solicit his or her friends to join the network. Hotnode looks at the user's emails as they come through and builds a social network from them automatically, based on who they correspond with. The user can then search, visualize and communicate using this network. This is totally automatic--there is no need to send or reply to "join my network" messages anymore. Hotnode learns just by seeing who talks to who.

[0298] We are going to start by providing the most powerful hosted blogging and wiki service on our platform, as well as a portal for our community of users. We provide semantic blogging, Website and Wiki services to individuals, groups, communities and organizations. We are the semantic equivalent of the Wikipedia or Everything2 (which we might consider acquiring or close collaboration with). Users get a basic blog and Wiki for free, without ads. We can also provide them with other semantic objects they can post to their blogs for various types of things. If a user accepts ads then they get some additional free features. Users can opt-in to advanced features by subscribing to the Hotnode Pro service ($9.95/month) and paying a la carte fees for other optional additions.

Tim O'Reilly   [10.22.07 08:40 AM]

FWIW, on 0295, I'm pretty sure that the original incarnation of Lili Cheng's Wallop, from Microsoft Research (not the version they spun out, which had all the good bits taken out), would constitute prior art.

But the main thing that this information shows is just how little you can learn from patent applications. In the original vision, the patent "taught" someone how to do the invention. That was the trade for protection. Do today's patent applications teach you how to do anything? I don't think so. It's a sad testimony to the degradation of a once powerful idea.

Nova Spivack   [10.23.07 10:49 PM]

That is almost ancient stuff from a provisional filing that was actually based on very early research we did in 2001-2003. Our patent attorney at the time simply threw our raw notes into a provisional -- we were not aware he had actually done that, and we were quite unhappy about that work. It did cover some of our early ideas; however, as you may know, all that really matters is what is in the claims.

Most of our new work is covered in more recent, much more professional filings, that are probably not a matter of public record yet.

Semantic Surfer   [10.29.07 09:36 AM]

Doesn't Radar's underlying technology come from another company? They don't talk much about that. Also, the entity extraction is so commoditized it's not worth describing. 100 products can do it for your app right now.

Nova Spivack   [10.31.07 01:04 AM]

In reference to the above comment: The vast majority of our technology was developed by us and is related to core capabilities of our semantic platform and the functionality of Twine. This includes our own proprietary auto-tagging capabilities, machine learning, mining, etc. However, the platform under Twine also enables us to connect to external applications and services, for example to add third-party natural language processing, search, or reasoning capabilities to Twine where they could be beneficial to users.
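For readers unfamiliar with what auto-tagging involves at its simplest: a toy gazetteer-based tagger is sketched below. This is nothing like Radar's proprietary engine (which Nova describes as using machine learning); the entity list and note text are invented for illustration.

```python
import re

# Toy gazetteer mapping known entity strings to types. A real
# entity extractor uses statistical models and large knowledge
# bases rather than a hand-built lookup table.
GAZETTEER = {
    "Nova Spivack": "person",
    "Radar Networks": "company",
    "San Francisco": "place",
}

def auto_tag(text):
    """Return entities found in the text, grouped by type."""
    tags = {}
    for entity, etype in GAZETTEER.items():
        if re.search(re.escape(entity), text):
            tags.setdefault(etype, []).append(entity)
    return tags

note = "Met Nova Spivack of Radar Networks in San Francisco."
tags = auto_tag(note)
# tags now groups the matched entities by type, e.g.
# {'person': ['Nova Spivack'], 'company': ['Radar Networks'], 'place': ['San Francisco']}
```

In a system like Twine, each extracted entity would then become both a link and a sidebar tag, letting documents be pivoted by the people, places, and companies they mention.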

Andrew Clark   [11.03.07 02:00 PM]

I apologize for entering this discussion abruptly. I have been following this emergence closely since the Web 2.0 Summit. I am a developmental psychiatrist with a research interest in social networking platforms and the human potential that can be brought forth from the semantic web if it's done right! I also have a great interest in risk behavior, as this is my NIDA-funded research with adolescent populations. Basically, I've started out on creating an evidence base for the risks and benefits of social networking among kids. I have a rich vision for the human interface to a semantic web which accounts for the subtler dynamic aspects of my field. I am formalizing these specific elements currently. The collective consciousness is emerging, with the unconscious soon following suit. Please provide safety measures for individuals and society embedded within it. Boundaries, boundaries, boundaries: so necessary, yet they will soon be more diffuse than ever before with Web 3.0. I am interested in forming a consulting firm to this specific aim but am just getting it figured out. Please contact me for interest, discussion, or suggestions.

