Patent application title: CONCEPT DISCOVERY FROM TEXT VIA KNOWLEDGE TRANSFER
Inventors:
IPC8 Class: AG06F1635FI
Publication date: 2021-05-13
Patent application number: 20210141823
Abstract:
Documents from a set of related documents in a domain are processed to
identify keywords associated with each document. The documents are then
further processed to identify the documents that are the most similar to
each other. For each document, some or all of the keywords that are
associated with the similar documents, but not the document itself, are
selected as semantic tags for the document. These semantic tags
determined for a document represent novel or hidden concepts and contexts
that may relate to the document, but that do not actually appear in the
document. The documents are used to train a model that generates semantic
tags for a document or for keywords associated with the document. The
generated model can then be used for a variety of purposes such as the
creation of an index for a set of documents or for query expansion.
Claims:
1. A method for determining semantic tags for a document, comprising:
receiving a set of documents by a computing device, wherein each document
in the set of documents comprises a first set of keywords; for each
document in the set of documents, determining one or more documents of
the set of documents that are similar to the document by the computing
device; and for each document in the set of documents, based on one or
more documents that are similar to the document, determining a second set
of keywords for the document by the computing device.
2. The method of claim 1, wherein the second set of keywords are semantic tags.
3. The method of claim 1 wherein, for each document, the second set of keywords is different than the first set of keywords.
4. The method of claim 1, further comprising training a model using the first set of keywords determined for each document.
5. The method of claim 4, further comprising: receiving a document, wherein the document is not in the set of documents; and determining one or more semantic tags for the document using the model.
6. The method of claim 1, wherein each document of the set of documents comprises a plurality of terms, and further comprising, for each document of the set of documents, generating the first set of keywords by: computing a frequency for each term of the plurality of terms; and selecting the first set of keywords from the terms of the plurality of terms based on the computed frequencies.
7. The method of claim 6, wherein computing the frequency for a term comprises computing the term frequency-inverse document frequency ("TFIDF") for the term.
8. The method of claim 1, wherein for each document of the set of documents, determining the second set of keywords for the document comprises determining keywords from the first set of keywords associated with each of the one or more similar documents that are not in the first set of keywords associated with the document, and generating the second set of keywords based on the determined keywords.
9. The method of claim 1, wherein determining one or more documents of the set of documents that are similar to the document comprises determining the one or more documents using a cosine similarity-based function.
10. A system for determining semantic tags for a document, comprising: at least one computing device; and a memory storing instructions that when executed by the at least one computing device cause the at least one computing device to: receive a set of documents, wherein each document in the set of documents comprises a first set of keywords; for each document in the set of documents, determine one or more documents of the set of documents that are similar to the document; and for each document in the set of documents, based on one or more documents that are similar to the document, determine a second set of keywords for the document.
11. The system of claim 10, wherein the second set of keywords are semantic tags.
12. The system of claim 10, wherein, for each document, the second set of keywords is different than the first set of keywords.
13. The system of claim 10, further comprising instructions that when executed by the at least one computing device cause the at least one computing device to train a model using the first set of keywords determined for each document.
14. The system of claim 13, further comprising instructions that when executed by the at least one computing device cause the at least one computing device to: receive a document, wherein the document is not in the set of documents; and determine one or more semantic tags for the document using the model.
15. A computer-readable medium storing instructions that when executed by at least one computing device cause the at least one computing device to: receive a set of documents, wherein each document in the set of documents comprises a first set of keywords; for each document in the set of documents, determine one or more documents of the set of documents that are similar to the document; and for each document in the set of documents, based on one or more documents that are similar to the document, determine a second set of keywords for the document.
16. The computer-readable medium of claim 15, wherein the second set of keywords are semantic tags.
17. The computer-readable medium of claim 15, wherein, for each document, the second set of keywords is different than the first set of keywords.
18. The computer-readable medium of claim 15, further comprising instructions that when executed by the at least one computing device cause the at least one computing device to train a model using the first set of keywords determined for each document.
19. The computer-readable medium of claim 18, further comprising instructions that when executed by the at least one computing device cause the at least one computing device to: receive a document, wherein the document is not in the set of documents; and determine one or more semantic tags for the document using the model.
20. The computer-readable medium of claim 15, wherein determining one or more documents of the set of documents that are similar to the document comprises determining the one or more documents using a cosine similarity-based function.
Description:
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. provisional patent application No. 62/931,843, filed on Nov. 7, 2019, and entitled "CONCEPT DISCOVERY FROM TEXT VIA KNOWLEDGE TRANSFER," the disclosure of which is expressly incorporated herein by reference in its entirety.
BACKGROUND
[0002] Traditional knowledge graphs driven by knowledge bases can represent facts about, and capture relationships among, entities very well, and thus perform quite accurately in fact-based information retrieval or question answering. However, novel contexts, consisting of a new set of terms referring to one or more concepts, may appear in a real-world querying scenario in the form of a natural language question or a search query into a document retrieval system. These contexts may not directly refer to existing entities or surface-form concepts occurring in the relations within a knowledge base. Thus, in addressing these novel contexts, such as those appearing in nuanced subjective queries, such systems can fall short. This is because hidden relations that are meaningful in the current context may exist in a collection between candidate latent concepts or entities that have different surface realizations via alternate lexical forms, but that are not currently present in a curated knowledge source such as a knowledge base or an ontology.
[0003] It is with respect to these and other considerations that the various aspects and embodiments of the present disclosure are presented.
SUMMARY
[0004] Documents from a set of related documents in a domain are processed to identify keywords associated with each document. The documents are then further processed to identify the documents that are the most similar to each other. For each document, some or all of the keywords that are associated with the similar documents, but not the document itself, are selected as semantic tags for the document. These semantic tags determined for a document represent novel or hidden concepts and contexts that may relate to the document, but that do not actually appear in the document. The semantic tags and the documents are used to train a model that generates semantic tags for a document or for keywords associated with the document. The generated model can then be used for a variety of purposes such as the creation of an index for a set of documents or for query expansion.
[0005] In an embodiment, a method for determining semantic tags for a document is provided. The method includes: receiving a set of documents by a computing device, wherein each document in the set of documents comprises a first set of keywords; for each document in the set of documents, determining one or more documents of the set of documents that are similar to the document by the computing device; and for each document in the set of documents, based on one or more documents that are similar to the document, determining a second set of keywords for the document by the computing device.
[0006] Embodiments may have some or all of the following features. The second set of keywords may be semantic tags. For each document, the second set of keywords may be different than the first set of keywords. The method may further include training a model using the first set of keywords determined for each document. The method may further include: receiving a document, wherein the document is not in the first set of documents; and determining one or more semantic tags for the document using the model. Each document of the set of documents may include a plurality of terms. The method may further include, for each document of the set of documents, generating the first set of keywords by: computing a frequency for each term of the plurality of terms; and selecting the first set of keywords from the terms of the plurality of terms based on the computed frequencies. Computing the frequency for a term may include computing the term frequency-inverse document frequency ("TFIDF") for the term. For each document of the plurality of documents, determining the second set of keywords for the document may include: determining keywords from the first set of keywords associated with each of the one or more similar documents that are not in the first set of keywords associated with the document, and generating the second set of keywords based on the determined keywords. Determining one or more documents of the set of documents that are similar to the document may include determining the one or more documents using a cosine similarity-based function.
[0007] In an embodiment, a system for determining semantic tags for a document is provided. The system includes: at least one computing device; and a memory storing instructions that when executed by the at least one computing device cause the at least one computing device to: receive a set of documents, wherein each document in the set of documents comprises a first set of keywords; for each document in the set of documents, determine one or more documents of the set of documents that are similar to the document; and for each document in the set of documents, based on one or more documents that are similar to the document, determine a second set of keywords for the document.
[0008] Embodiments may include some or all of the following features. The second set of keywords may be semantic tags. For each document, the second set of keywords may be different than the first set of keywords. The instructions may further include instructions that when executed by the at least one computing device cause the at least one computing device to train a model using the first set of keywords determined for each document. The instructions may further include instructions that when executed by the at least one computing device cause the at least one computing device to: receive a document, wherein the document is not in the first set of documents; and determine one or more semantic tags for the document using the model.
[0009] In an embodiment, a computer-readable medium is provided. The computer-readable medium may store instructions that when executed by at least one computing device cause the at least one computing device to: receive a set of documents, wherein each document in the set of documents comprises a first set of keywords; for each document in the set of documents, determine one or more documents of the set of documents that are similar to the document; and for each document in the set of documents, based on one or more documents that are similar to the document, determine a second set of keywords for the document.
[0010] Embodiments may include some or all of the following features. The second set of keywords may be semantic tags. For each document, the second set of keywords may be different than the first set of keywords. The instructions may include instructions that when executed by the at least one computing device cause the at least one computing device to train a model using the first set of keywords determined for each document. The instructions may include instructions that when executed by the at least one computing device cause the at least one computing device to: receive a document, wherein the document is not in the first set of documents; and determine one or more semantic tags for the document using the model. Determining one or more documents of the set of documents that are similar to the document may include determining the one or more documents using a cosine similarity-based function.
[0011] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The foregoing summary, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the embodiments, there is shown in the drawings example constructions of the embodiments; however, the embodiments are not limited to the specific methods and instrumentalities disclosed. In the drawings:
[0013] FIG. 1 is an illustration of an environment for an example semantic engine for determining semantic tags for one or more documents;
[0014] FIG. 2 is an illustration of an example method for training a model to generate semantic tags for documents;
[0015] FIG. 3 is an illustration of an example method for using a semantic model to expand a query; and
[0016] FIG. 4 shows an exemplary computing environment in which example embodiments and aspects may be implemented.
DETAILED DESCRIPTION
[0017] This description provides examples not intended to limit the scope of the appended claims. The figures generally indicate the features of the examples, where it is understood and appreciated that like reference numerals are used to refer to like elements. Reference in the specification to "one embodiment" or "an embodiment" or "an example embodiment" means that a particular feature, structure, or characteristic described is included in at least one embodiment described herein and does not imply that the feature, structure, or characteristic is present in all embodiments described herein.
[0018] FIG. 1 is an illustration of an environment 100 for an example semantic engine 110 for determining semantic tags 127 for one or more documents, such as the documents 105 in a set of documents 107, or one or more other documents 130. The semantic tags 127 determined for each document (e.g., of the documents 105 and/or the document(s) 130) may represent hidden or latent concepts and contexts that are related to the document, but that do not appear in the document itself. The semantic tags 127 determined for each document may be used for a variety of purposes such as index generation and query expansion.
[0019] As shown, the semantic engine 110 may include several components including a training engine 115 and a tag engine 125. More or fewer components may be supported. The semantic engine 110, including the training engine 115 and the tag engine 125, may be implemented together or separately using one or more general purpose computing devices such as the computing device 400 illustrated with respect to FIG. 4.
[0020] In general, a document (such as a document 105 in the set of documents 107, or a document 130) may include a plurality of terms (e.g., words and phrases). The document(s) may include webpages, papers, publications, and queries. Other types of documents may be supported.
[0021] The training engine 115 receives the set of documents 107 and, based on the documents 105 in the set of documents 107, generates a semantic model 117 that may be used to determine semantic tags 127 for the other document(s) 130 or for keywords associated with the other document(s) 130. The other document(s) 130 are documents that are not among the documents 105 in the set of documents 107 used to train the semantic model 117.
[0022] The training engine 115 may generate the semantic model 117 from the set of documents 107. The documents 105 in the set of documents 107 may be related to the same general topic or field. For example, the documents 105 in the set of documents 107 may be research papers in the field of evidence-based medicine. In another example, the documents 105 in the set of documents 107 may be movie reviews. Any document topic, field, or domain may be supported.
[0023] The training engine 115 may generate the semantic model 117 using training data. The training data may include the documents 105 of the set of documents 107, and one or more labels. The labels may be semantic tags 127 determined for some or all of the documents 105.
[0024] In some embodiments, the semantic model 117 may be trained by the training engine 115 using a variety of methods and techniques including, but not limited to, doc2vec, Deep Averaging, and sequential models such as long short-term memory ("LSTM"), gated recurrent units ("GRU"), bidirectional GRU ("BiGRU"), and bidirectional LSTM ("BiLSTM"), with attention and self-attention. Other methods for training models may be used.
[0025] The training process may include two phases: a first phase for generating keywords for the documents 105 and training the input representations, and a second phase of inference that achieves term transfer, generating the semantic tags 127 for each document 105 in the set of documents 107.
[0026] As part of the first phase, the training engine 115 may generate a set of k keywords for each query document 105 d_q in the set of documents D = {d_1, d_2, . . . , d_n}. The keywords generated for a document 105 may be the most relevant terms from the plurality of terms that are included in the document 105. In some embodiments, the keywords may be selected for a document 105 using term scoring methods such as term frequency-inverse document frequency ("TFIDF"). In TFIDF, each term in a document 105 receives a score that indicates its relevance or importance to the document 105. The k terms of the document 105 with the highest scores may be selected as the keywords for the document 105. The number of keywords k determined for each document 105 may be set by a user or administrator.
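The keyword selection described above might be sketched as follows; this is a minimal, hand-rolled TFIDF over pre-tokenized documents (the function name and the specific weighting are illustrative assumptions, not taken from the disclosure):

```python
import math
from collections import Counter

def top_k_tfidf_keywords(documents, k=3):
    """For each tokenized document, score terms by TFIDF and keep the top k."""
    n_docs = len(documents)
    # Document frequency: the number of documents containing each term.
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    keywords = []
    for doc in documents:
        tf = Counter(doc)
        # Term frequency times inverse document frequency.
        scores = {t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf}
        # Highest-scoring terms first; ties broken alphabetically.
        top = sorted(scores, key=lambda t: (-scores[t], t))[:k]
        keywords.append(top)
    return keywords
```

A term appearing in every document scores zero (log of 1), so only terms that distinguish a document from the rest of the set survive as keywords.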
[0027] In some embodiments, the training engine 115 may first learn the appropriate feature representations (i.e., keywords) of the documents 105 in the set of documents 107 in the first phase of training, by taking in the tokens (i.e., terms) of an input document 105 sequentially, using a document's pre-determined top k TFIDF-scored terms as pseudo-class labels for an input instance (i.e., prediction targets for a sigmoid layer for multi-label classification). The training objective is to maximize the probability of these k terms, or y_p = (t_1, t_2, . . . , t_k) ∈ V, using equation 1:
argmax_θ P(y_p = (t_1, t_2, . . . , t_k) ∈ V | v; θ)   (1)
[0028] In equation 1, V may be the list of the top 10,000 TFIDF-scored terms of the corpus of terms in the set of documents 107, v may be the TFIDF-scored terms associated with a document 105 of the set of documents 107, and t may be a term from a document 105. The training engine 115 may train the semantic model 117 with a label vector including the top 10,000 TFIDF-scored terms as targets for a sigmoid classification layer, employing a couple of alternative training objectives. Other size vectors may be used depending on the number of documents 105 in the set of documents 107.
[0029] The first training objective used by the training engine 115 may be to minimize a categorical cross-entropy loss for a single training instance with ground-truth label set y_p, using equation 2:
L_CE(y_p) = Σ_{i=1}^{|V|} y_p log(y_i)   (2)
[0030] In order to predict semantic tags 127 for a document 130, the training engine 115 may further use a language model-based loss objective to convert the decoder to a neural language model. Accordingly, the training engine 115 may use a training objective that maximizes the conditional log likelihood of the label terms L_d of a document d_q with representation v, i.e., P(L_d | d_q) where y_p = L_d ∈ V. This amounts to minimizing the negative log likelihood of the label representations conditioned on the document encoding, as shown in equation 3:
P(L_d | d_q) = Π_{l ∈ L_d} P(l | d_q), so that -log P(L_d | d_q) = -Σ_{l ∈ L_d} log P(l | d_q)   (3)
[0031] Because P(l | d_q) ∝ exp(v_l · v), where v_l and v are the label and document encodings, equation 3 is equivalent to minimizing equation 4:
L_LM(y_p) = -Σ_{l ∈ L_d} log(exp(v_l · v))   (4)
[0032] The training engine 115 may train the semantic model 117 using the set of documents 107 and the equations 2 and 4 described above. Alternatively, or additionally, the training engine 115 may train the semantic model 117 using a summation of both equations and a hyper-parameter α that is used to tune the language model component of the total loss objective. Other methods for training a model may be used.
[0033] The tag engine 125 may generate one or more semantic tags 127 for a document, such as one of the document(s) 130. The tag engine 125 may receive the document 130 and may use the semantic model 117 to generate the one or more semantic tags 127. The document 130 may be related to the set of documents 107 that was used to train the semantic model 117. For example, if the set of documents 107 were journal articles in a topic such as physics, the document 130 may also be a journal article in the topic of physics.
[0034] In some embodiments, the tag engine 125 may generate semantic tags 127 for a document 130 using the semantic model 117. In particular, the tag engine 125 may generate the semantic tags 127 for the document 130 without first determining any keywords.
[0035] The semantic engine 110 and semantic tags 127 as described herein can be used for a variety of applications. One such application is query expansion. A particular set of documents 107 is used to train a semantic model 117 as described above. When a query is received by a search engine associated with the set of documents 107 from a user, the terms of the query are treated as document keywords and are used by the semantic model 117 to generate one or more semantic tags for some or all of the terms of the query. The query is then expanded by adding the semantic tags 127 to the original terms of the query. The expanded query is used by the search engine to search the set of documents 107. As may be appreciated, this is an improvement to prior art searching methods because it is not necessary for the user to understand all of the terms of art or specific terms used in the set of documents 107 when formulating their initial query.
[0036] Another application for the semantic engine 110 is generating an index for a set of documents 107. An index may be initially created for the documents 105 of a set of documents 107. The index may include an entry for each keyword along with a link or reference to each document 105 that is associated with the keyword. After the index is created, the semantic model 117 may be used to determine the semantic tags associated with each document 105. The determined semantic tags 127, and references to their associated documents 105, may be added to the index. Where the semantic tags 127 match one or more of the keywords already in the index, references to the documents 105 associated with the semantic tags 127 may be added to the existing entries of the matching keywords.
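The index augmentation described above could look roughly like this; the data shapes (mappings from document id to keyword or tag sets) and the function names are illustrative assumptions:

```python
from collections import defaultdict

def build_index(docs_with_keywords):
    """Build an inverted index: keyword -> set of document ids.
    docs_with_keywords maps each document id to its set of keywords."""
    index = defaultdict(set)
    for doc_id, keywords in docs_with_keywords.items():
        for kw in keywords:
            index[kw].add(doc_id)
    return index

def augment_index(index, docs_with_tags):
    """Add semantic tags to an existing index. A tag that matches an
    existing keyword entry simply extends that entry with the tagged
    documents; a new tag creates a new entry."""
    for doc_id, tags in docs_with_tags.items():
        for tag in tags:
            index[tag].add(doc_id)
    return index
```

After augmentation, a document becomes findable under terms (its semantic tags) that never appear in the document itself.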
[0037] FIG. 2 is an illustration of an example method 200 for training a model to determine semantic tags for documents. The method 200 may be implemented by the semantic engine 110, for example.
[0038] At 210, a set of documents is received. The set of documents 107 may be received by the training engine 115 of the semantic engine 110. The documents 105 in the set of documents 107 may be related documents 105. For example, the set of documents 107 may include documents 105 such as medical research papers, political articles, legal documents, or messages in a social networking application. Other types of documents 105 may be supported.
[0039] At 220, for each document in the set of documents, a first set of keywords is determined. The first set of keywords for each document 105 in the set of documents 107 may be determined by the training engine 115. The first set of keywords determined for a document 105 may be one or more terms from the document 105 that relate to the topic and/or main point of the document 105.
[0040] In some embodiments, the training engine 115 may determine the first set of keywords for a document 105 by scoring each term in the document 105 and selecting the highest scoring terms as the keywords for the document 105. The score for each term may be calculated using a scoring function such as TFIDF. Other scoring functions may be used. Alternatively, the keywords in the first set of keywords may be determined by a reviewer or may have been provided by an author of the document 105.
[0041] At 230, a model is trained using the first keywords determined for each document. The model may be the semantic model 117 and may be trained by the training engine 115. Depending on the embodiment, the semantic model 117 may be a neural language model. Other types of models may be supported. The semantic model 117 may be adapted to receive a document 130 (i.e., a document that may not have been in the set of documents 107 used to train the model 117) and to output a set of semantic tags 127 for the document 130. The semantic tags 127 may be terms that do not necessarily appear in the document 130 (or keywords associated with the document 130), but that have been determined to be relevant to the document 130.
[0042] At 240, for each document in the set of documents, one or more similar documents are determined. The similar documents 105 may be determined by the training engine 115 using trained document representations. In some embodiments, the training engine 115 may determine documents 105 from the set of documents 107 that are similar to a particular document 105 by calculating the similarity of the particular document 105 to each of the other documents 105 in the set of documents 107. The top k most similar documents 105 may be selected as the one or more similar documents 105. The size of k may be set by a user or administrator. The similarity of documents 105 may be calculated using a cosine similarity function. Other similarity functions may be used. Alternatively, the similar documents 105 may be identified by a reviewer or administrator.
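The top-k similarity step might be sketched as follows over document representation vectors; the vector inputs and function names are illustrative assumptions:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm if norm else 0.0

def top_k_similar(doc_vectors, query_index, k=2):
    """Return indices of the k documents most similar to
    doc_vectors[query_index], excluding the query document itself."""
    query = doc_vectors[query_index]
    scored = [(cosine_similarity(query, vec), i)
              for i, vec in enumerate(doc_vectors) if i != query_index]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]
```

Any learned representation (e.g., the encodings from the first training phase) can be plugged in as `doc_vectors`; cosine similarity only assumes comparable fixed-length vectors.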
[0043] At 250, for each document in the set of documents, based on the determined one or more similar documents, a second set of keywords is determined. The tag engine 125 may determine the second set of keywords for a document 105 from the first set of keywords associated with each of the documents 105 that were determined to be similar to the document 105. The terms in the second set of keywords are the semantic tags 127 for the document 105. Generally, for each document 105, the terms in the second set of keywords are different than the terms in the first set of keywords for the document 105.
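The term-transfer step above reduces to a set difference: collect the similar documents' keywords and drop any that the document already has. A minimal sketch, with illustrative names:

```python
def transfer_keywords(doc_keywords, similar_ids, doc_id):
    """Second-set keywords (semantic tags) for doc_id: keywords of its
    similar documents that do not already appear in its own first set."""
    own = set(doc_keywords[doc_id])
    transferred = set()
    for sim_id in similar_ids:
        transferred |= set(doc_keywords[sim_id]) - own
    return transferred
```

The result contains only terms absent from the document itself, matching the idea that semantic tags capture hidden concepts rather than restating the document's own keywords.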
[0044] FIG. 3 is an illustration of an example method 300 for expanding a query using semantic tags. The method 300 may be implemented by the tag engine 125 of the semantic engine 110.
[0045] At 310, a query is received. The query may be received by the tag engine 125 of the semantic engine 110. The query may have been provided by a user searching for a document 105 that matches the query. The query may include one or more terms (e.g., words or phrases) that the user believes will match one or more relevant documents 105.
[0046] At 320, one or more semantic tags are determined for the query. The one or more semantic tags 127 may be determined by the tag engine 125 using the semantic model 117. In some embodiments, the semantic model 117 may have been trained using keywords associated with one or more documents 105 in a particular field, topic, interest, or domain. The query may be for documents 105 in the same field that was used to train the model 117.
[0047] The semantic tags 127 determined by the model 117 may be one or more terms that, while they did not appear in the query, are likely relevant to the terms of the query. For example, the semantic tags 127 may include "terms of art" or new terms that are being used in the field or topic associated with the query that the user may not be aware of.
[0048] At 330, the query is expanded using the determined semantic tags. The query may be expanded by the tag engine 125. The query may be expanded by adding the semantic tags to the query. Depending on the embodiment, each semantic tag may correspond to one or more of the terms of the original query. The tag engine 125 may then expand the query by adding each semantic tag 127 to its corresponding term of the query along with an "OR" operator so that either the original term of the query or its corresponding semantic tag 127 may match a document 105. Other methods for expanding a query may be used.
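The OR-based expansion described above might be sketched as follows; the mapping from query terms to their semantic tags, and the function name, are illustrative assumptions:

```python
def expand_query(terms, tags_for_term):
    """Join each query term with its semantic tags using OR, so that
    either the original term or a corresponding tag may match a document."""
    parts = []
    for term in terms:
        tags = tags_for_term.get(term, [])
        if tags:
            parts.append("(" + " OR ".join([term] + tags) + ")")
        else:
            parts.append(term)  # no tags: term passes through unchanged
    return " ".join(parts)
```

For example, a query for "heart pain" whose model tags "heart" with "cardiac" would become "(heart OR cardiac) pain", so documents using only the term of art still match.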
[0049] At 340, a document corpus is searched using the expanded query. The document corpus may be searched by the tag engine 125 for documents 105 that are responsive to the expanded query. Depending on the embodiment, a document 105 in the corpus may be responsive to the expanded query if it includes any of the terms of the original query or the semantic tags 127, or if it includes a particular combination of terms and semantic tags 127 defined by one or more operators (e.g., Boolean operators) in the expanded query.
[0050] At 350, indicators of documents that are responsive to the expanded query are provided. The indicators may be provided to the user that provided the original query by the tag engine 125. In some embodiments, the indicators may be provided along with the original received query and the expanded query that was used to search the document corpus.
[0051] FIG. 4 shows an exemplary computing environment in which example embodiments and aspects may be implemented. The computing device environment is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality.
[0052] Numerous other general-purpose or special-purpose computing device environments or configurations may be used. Examples of well-known computing devices, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like.
[0053] Computer-executable instructions, such as program modules, executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices.
[0054] With reference to FIG. 4, an exemplary system for implementing aspects described herein includes a computing device, such as computing device 400. In its most basic configuration, computing device 400 typically includes at least one processing unit 402 and memory 404. Depending on the exact configuration and type of computing device, memory 404 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 4 by dashed line 406.
[0055] Computing device 400 may have additional features/functionality. For example, computing device 400 may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 4 by removable storage 408 and non-removable storage 410.
[0056] Computing device 400 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by the device 400 and includes both volatile and non-volatile media, removable and non-removable media.
[0057] Computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 404, removable storage 408, and non-removable storage 410 are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 400. Any such computer storage media may be part of computing device 400.
[0058] Computing device 400 may contain communication connection(s) 412 that allow the device to communicate with other devices. Computing device 400 may also have input device(s) 414 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 416 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here.
[0059] It should be understood that the various techniques described herein may be implemented in connection with hardware components or software components or, where appropriate, with a combination of both. Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium where, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter.
[0060] As used herein, the terms "can," "may," "optionally," "can optionally," and "may optionally" are used interchangeably and are meant to include cases in which the condition occurs as well as cases in which the condition does not occur.
[0061] Ranges can be expressed herein as from "about" one particular value, and/or to "about" another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent "about," it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint. It is also understood that there are a number of values disclosed herein, and that each value is also herein disclosed as "about" that particular value in addition to the value itself. For example, if the value "10" is disclosed, then "about 10" is also disclosed.
[0063] Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices might include personal computers, network servers, and handheld devices, for example.
[0064] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.