Dan Willard
Dan Edward Willard (September 19, 1948 – January 21, 2023) was an American computer scientist and logician, and a professor of computer science at the University at Albany.

Education and career

Willard did his undergraduate studies in mathematics at Stony Brook University, graduating in 1970. He went on to graduate studies in mathematics at Harvard University, earning a master's degree in 1972 and a doctorate in 1978. After leaving Harvard, he worked at Bell Labs for four years before joining the Albany faculty in 1983 (curriculum vitae, accessed 2013-06-04).


Contributions

Although trained as a mathematician and employed as a computer scientist, Willard's most highly cited publication is in evolutionary biology: the Trivers–Willard hypothesis, published with biologist Robert Trivers in 1973. Within computer science, his contributions center on data structures and models of computation, surveyed in the related entries below.

New York City
New York, often called New York City (NYC), is the most populous city in the United States, located at the southern tip of New York State on one of the world's largest natural harbors. The city comprises five boroughs, each coextensive with a respective county. The city is the geographical and demographic center of both the Northeast megalopolis and the New York metropolitan area, the largest metropolitan area in the United States by both population and urban area. New York is a global center of finance and commerce, culture, technology, entertainment and media, academics and scientific output, the arts and fashion, and, as home to the headquarters of the United Nations, international diplomacy. With an estimated population of 8,478,072 in 2024, the city is the most densely populated major city in the United States. New York City has more than double the population of Los Angeles, the nation's second-most populous city.


Fractional Cascading
In computer science, fractional cascading is a technique to speed up a sequence of binary searches for the same value in a sequence of related data structures. The first binary search in the sequence takes a logarithmic amount of time, as is standard for binary searches, but successive searches in the sequence are faster. The original version of fractional cascading, introduced in two papers by Chazelle and Guibas in 1986, combined the idea of cascading, originating in earlier range searching data structures, with the idea of fractional sampling. Later authors introduced more complex forms of fractional cascading that allow the data structure to be maintained as the data changes by a sequence of discrete insertion and deletion events.

Example

As a simple example of fractional cascading, consider the following problem. We are given as input a collection of k ordered lists L_i of numbers, such that the total length \sum_i |L_i| of all lists is n, and we must search every list for the same query value q.
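To make the example concrete, here is a minimal Python sketch of the construction just described (hypothetical code, not from any cited paper). Each augmented list M[i] merges the input list L[i] with every second element of M[i+1], and every entry carries two pointers: its position in its own input list and a position in the next augmented list. Only the first lookup is a full binary search; each later list is reached through the stored pointer plus a constant number of fix-up steps:

import bisect

INF = float("inf")

def _lower_bound(entries, q):
    # Binary search: index of the first entry whose value is >= q.
    lo, hi = 0, len(entries)
    while lo < hi:
        mid = (lo + hi) // 2
        if entries[mid][0] < q:
            lo = mid + 1
        else:
            hi = mid
    return lo

def build(lists):
    # M[i] holds tuples (value, index_in_own_list, index_in_next_M),
    # terminated by an INF sentinel so lookups never fall off the end.
    k = len(lists)
    M = [None] * k
    M[k - 1] = [(v, j, 0) for j, v in enumerate(lists[-1])]
    M[k - 1].append((INF, len(lists[-1]), 0))
    for i in range(k - 2, -1, -1):
        promoted = [e[0] for e in M[i + 1][0:-1:2]]  # every 2nd real entry
        M[i] = [(v,
                 bisect.bisect_left(lists[i], v),    # position in own list
                 _lower_bound(M[i + 1], v))          # position in next level
                for v in sorted(lists[i] + promoted)]
        M[i].append((INF, len(lists[i]), len(M[i + 1]) - 1))
    return M

def search_all(lists, M, q):
    # Return, for each list, the index of its first element >= q,
    # doing only one full binary search (at the top level).
    out = []
    pos = _lower_bound(M[0], q)
    for i in range(len(lists)):
        # The inherited pointer can overshoot by O(1); walk back to
        # the first entry >= q.
        while pos > 0 and M[i][pos - 1][0] >= q:
            pos -= 1
        value, own, nxt = M[i][pos]
        out.append(own)
        pos = nxt
    return out

lists = [[1, 5, 8], [2, 5, 9, 10], [3, 4, 8]]
M = build(lists)
print(search_all(lists, M, 5))  # -> [1, 1, 2]

The fix-up loop takes constant time because only every second element of M[i+1] is promoted into M[i], so at most a couple of candidates can sit between the inherited pointer and the correct position.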


Journal Of Computer And System Sciences
The ''Journal of Computer and System Sciences'' (JCSS) is a peer-reviewed scientific journal in the field of computer science. ''JCSS'' is published by Elsevier, and it was started in 1967. Many influential scientific articles have been published in ''JCSS''; these include five papers that have won the Gödel Prize, an annual prize for outstanding papers in theoretical computer science given jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery (ACM). Its managing editor is Michael Segal.


Priority Queue
In computer science, a priority queue is an abstract data type similar to a regular queue or stack abstract data type. In a priority queue, each element has an associated ''priority'', which determines its order of service: the priority queue serves the highest-priority items first. Priority values have to be instances of an ordered data type, and higher priority can be given either to the lesser or to the greater values with respect to the given order relation. For example, in the Java standard library's ''PriorityQueue'', the least elements with respect to the order have the highest priority. This implementation detail is without much practical significance, since passing to the opposite order relation turns the least values into the greatest, and vice versa. While priority queues are often implemented using heaps, they are conceptually distinct. A priority queue can ...
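As a concrete illustration, Python's heapq module behaves like the min-oriented convention described above (the task names here are made up), and negating the keys is exactly the "passing to the opposite order" trick:

import heapq

# Min-oriented: the least (priority, task) pair is served first.
pq = []
heapq.heappush(pq, (2, "write report"))
heapq.heappush(pq, (1, "fix outage"))
heapq.heappush(pq, (3, "refill coffee"))
while pq:
    priority, task = heapq.heappop(pq)
    print(priority, task)        # 1 fix outage, then 2 ..., then 3 ...

# Max-oriented behavior via the opposite order: negate the keys.
maxpq = []
for priority, task in [(2, "b"), (5, "a"), (1, "c")]:
    heapq.heappush(maxpq, (-priority, task))
print(heapq.heappop(maxpq))     # (-5, 'a'): the greatest original priority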


Fusion Tree
In computer science, a fusion tree is a type of tree data structure that implements an associative array on w-bit integers over a finite universe, where each of the input integers is non-negative and less than 2^w. It uses O(n) space and performs searches in O(log_w n) time, which is asymptotically faster than a traditional self-balancing binary search tree, and also better than the van Emde Boas tree for large values of w. It achieves this speed by using certain constant-time operations that can be done on a machine word. Fusion trees were invented in 1990 by Michael Fredman and Dan Willard. Several advances have been made since Fredman and Willard's original 1990 paper. In 1999, it was shown how to implement fusion trees under a model of computation in which all of the underlying operations of the algorithm belong to AC0, a model of circuit complexity that allows addition and bitwise Boolean operations ...
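The Python sketch below shows the flavor of one such constant-time word operation: ranking a query against several short keys packed into a single machine word. It illustrates only the packed parallel-comparison idea, not the full fusion-tree construction (which additionally compresses keys by "sketching"); the field layout and names are our own, and the loops that build the repeated query and the mask stand in for single multiplications on real hardware:

def pack(keys, b):
    # Pack each b-bit key into a (b+1)-bit field, with a test bit of 1
    # in front of every field.
    word = 0
    for i, key in enumerate(keys):
        word |= ((1 << b) | key) << (i * (b + 1))
    return word

def rank(word, k, b, query):
    # Number of packed keys that are < query, via one word subtraction.
    repeated = 0                 # query copied into every field
    mask = 0                     # selects the k test bits
    for i in range(k):
        repeated |= query << (i * (b + 1))
        mask |= 1 << (i * (b + 1) + b)
    # Subtracting clears field i's test bit exactly when key_i < query;
    # the test bits prevent borrows from crossing field boundaries.
    survivors = (word - repeated) & mask
    return k - bin(survivors).count("1")

keys = [3, 7, 9, 14]                  # sorted 4-bit keys
print(rank(pack(keys, 4), 4, 4, 8))   # -> 2 (only 3 and 7 are below 8)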


Transdichotomous Model
In computational complexity theory, and more specifically in the analysis of algorithms with integer data, the transdichotomous model is a variation of the random-access machine in which the machine word size is assumed to match the problem size. The model was proposed by Michael Fredman and Dan Willard, who chose its name "because the dichotomy between the machine model and the problem size is crossed in a reasonable manner." In a problem such as integer sorting in which there are n integers to be sorted, the transdichotomous model assumes that each integer may be stored in a single word of computer memory, that operations on single words take constant time per operation, and that the number of bits that can be stored in a single word is at least log_2 n. The goal of complexity analysis in this model is to find time bounds that depend only on n and not on the actual size of the input values or the machine words. In modeling integer computation, it is necessary to assume that machine words ...
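A small Python illustration of that assumption (the helper name is ours): with n items, the word size w must be at least log2 n, which is precisely what an index or pointer into the input needs in order to fit in a single word:

import math

def min_word_size(n):
    # Smallest w with 2**w >= n, i.e. w >= log2(n): one word can then
    # hold any index into the n inputs.
    return max(1, math.ceil(math.log2(n)))

for n in (1_000, 1_000_000, 2**32):
    print(f"n = {n}: need w >= {min_word_size(n)} bits")
# On 64-bit hardware the assumption holds for any input that fits in
# addressable memory, since an address already uses about log2(n) bits.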


Radix Sorting
In computer science, radix sort is a non-comparative sorting algorithm. It avoids comparison by creating and distributing elements into buckets according to their radix. For elements with more than one significant digit, this bucketing process is repeated for each digit, while preserving the ordering of the prior step, until all digits have been considered. For this reason, radix sort has also been called bucket sort and digital sort. Radix sort can be applied to data that can be sorted lexicographically, be they integers, words, punch cards, playing cards, or the mail.

History

Radix sort dates back as far as 1887 to the work of Herman Hollerith on tabulating machines. Radix sorting algorithms came into common use as a way to sort punched cards as early as 1923 (Donald Knuth, ''The Art of Computer Programming'', Volume 3: ''Sorting and Searching'', Third Edition, Addison-Wesley, 1997, Section 5.2.5: Sorting by Distribution, pp. 168–179). The first memory-efficient ...
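A minimal least-significant-digit radix sort in Python, following the distribute-and-collect description above (the base-10 choice and the names are ours):

def radix_sort(nums, base=10):
    # Sort non-negative integers by bucketing on each digit, least
    # significant first; collecting buckets in order keeps every pass
    # stable, preserving the ordering of the prior step.
    if not nums:
        return []
    out = list(nums)
    largest = max(nums)
    exp = 1
    while largest // exp > 0:
        buckets = [[] for _ in range(base)]
        for x in out:
            buckets[(x // exp) % base].append(x)         # distribute
        out = [x for bucket in buckets for x in bucket]  # collect
        exp *= base
    return out

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]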


Comparison Sort
A comparison sort is a type of sorting algorithm that only reads the list elements through a single abstract comparison operation (often a "less than or equal to" operator or a three-way comparison) that determines which of two elements should occur first in the final sorted list. The only requirement is that the operator forms a total preorder over the data, with:

1. if ''a'' ≤ ''b'' and ''b'' ≤ ''c'' then ''a'' ≤ ''c'' (transitivity)
2. for all ''a'' and ''b'', ''a'' ≤ ''b'' or ''b'' ≤ ''a'' (connexity).

It is possible that both ''a'' ≤ ''b'' and ''b'' ≤ ''a''; in this case either may come first in the sorted list. In a stable sort, the input order determines the sorted order in this case. Comparison sorts studied in the literature are "comparison-based". Elements ''a'' and ''b'' can be swapped or otherwise re-arranged by the algorithm only when the order between these elements has been established based on the outcomes of prior comparisons. This is the case when ...
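The Python sketch below (ours; the leq parameter is a stand-in for the abstract operator) is a comparison sort in exactly this sense: a stable insertion sort whose only access to the elements is a single a ≤ b predicate:

def insertion_sort(items, leq):
    # The algorithm inspects elements only through leq(a, b), i.e. a <= b.
    out = []
    for x in items:
        i = len(out)
        # Walk left past every element strictly greater than x; equal
        # elements are not passed, which preserves input order (stability).
        while i > 0 and not leq(out[i - 1], x):
            i -= 1
        out.insert(i, x)
    return out

# Sort pairs by first component only; ties keep their input order.
data = [(2, "a"), (1, "b"), (2, "c"), (1, "d")]
print(insertion_sort(data, lambda a, b: a[0] <= b[0]))
# -> [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]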


Integer Sorting
In computer science, integer sorting is the algorithmic problem of sorting a collection of data values by integer keys. Algorithms designed for integer sorting may also often be applied to sorting problems in which the keys are floating point numbers, rational numbers, or text strings. The ability to perform integer arithmetic on the keys allows integer sorting algorithms to be faster than comparison sorting algorithms in many cases, depending on the details of which operations are allowed in the model of computing and how large the integers to be sorted are. Integer sorting algorithms including pigeonhole sort, counting sort, and radix sort are widely used and practical. Other integer sorting algorithms with smaller worst-case time bounds are not believed to be practical for computer architectures with 64 or fewer bits per word. Many such algorithms are known, with performance depending on a combination of the number of items to be sorted, number of bits per key, and number of ...
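As a sketch of how integer keys escape the comparison-sort lower bound, here is a counting sort in Python (the names are ours): it sorts n keys drawn from {0, ..., K} in O(n + K) time by using the keys directly as array indices rather than comparing them:

def counting_sort(keys, max_key):
    counts = [0] * (max_key + 1)
    for k in keys:
        counts[k] += 1            # histogram: one slot per possible key
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)   # emit each key value count times
    return out

print(counting_sort([4, 1, 3, 1, 0, 4], 4))   # -> [0, 1, 1, 3, 4, 4]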




Michael Fredman
Michael Lawrence Fredman is an emeritus professor at the Computer Science Department at Rutgers University, United States. He earned his Ph.D. from Stanford University in 1972 under the supervision of Donald Knuth. He was a member of the mathematics department at the Massachusetts Institute of Technology from 1974 to 1976, and of the Computer Science and Engineering department at the University of California, San Diego until 1992 (UCSD Mathematics: Department History). Among his contributions to computer science are the development of the fusion tree and the transdichotomous model of computation, both with Dan Willard.


Information Processing Letters
''Information Processing Letters'' is a peer-reviewed scientific journal in the field of computer science, published by Elsevier. The aim of the journal is to enable fast dissemination of results in the field of information processing in the form of short papers. Submissions are limited to nine double-spaced pages. The scope of IPL covers fundamental aspects of information processing and computing. This naturally covers topics in the broadly understood field of theoretical computer science, including algorithms, formal languages and automata, computational complexity, computational logic, distributed and parallel algorithms, computational geometry, learning theory, computational number theory, computational biology, coding theory, theoretical cryptography, and applied discrete mathematics. Generally, submissions in all areas of scientific inquiry are considered, provided that they describe research contributions credibly motivated by applications to com ...


Y-fast Trie
In computer science, a y-fast trie is a data structure for storing integers from a bounded domain. It supports exact and predecessor or successor queries in time ''O''(log log ''M''), using ''O''(''n'') space, where ''n'' is the number of stored values and ''M'' is the maximum value in the domain. The structure was proposed by Dan Willard in 1982 to decrease the ''O''(''n'' log ''M'') space used by an x-fast trie.

Structure

A y-fast trie consists of two data structures: the top half is an x-fast trie and the lower half consists of a number of balanced binary trees. The keys are divided into groups of ''O''(log ''M'') consecutive elements and for each group a balanced binary search tree is created. To facilitate efficient insertion and deletion, each group contains at least (log ''M'')/4 and at most 2 log ''M'' elements. For each balanced binary search tree a representative ''r'' is chosen. These representatives are stored in the x-fast ...
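The toy Python sketch below (our own simplification) mirrors this two-level shape: the bottom level keeps groups of about log M consecutive keys, while the top level of representatives is replaced here by a plain sorted list, so queries in this toy cost O(log n) rather than the O(log log M) an x-fast trie would provide:

import bisect
import math

class ToyYFastTrie:
    # Bottom level: groups of ~log2(M) consecutive keys (sorted lists
    # standing in for balanced BSTs). Top level: one representative per
    # group, here its maximum, kept in a plain sorted list.
    def __init__(self, values, M):
        size = max(1, int(math.log2(M)))
        values = sorted(values)
        self.groups = [values[i:i + size]
                       for i in range(0, len(values), size)]
        self.reps = [g[-1] for g in self.groups]

    def predecessor(self, q):
        # Largest stored value <= q, or None. The answer lies either in
        # the first group whose representative is >= q, or the one before.
        gi = bisect.bisect_left(self.reps, q)
        for i in (gi, gi - 1):
            if 0 <= i < len(self.groups):
                j = bisect.bisect_right(self.groups[i], q)
                if j > 0:
                    return self.groups[i][j - 1]
        return None

t = ToyYFastTrie([3, 9, 14, 20, 27, 33, 41], M=64)
print(t.predecessor(25))   # -> 20

A full y-fast trie instead stores the representatives in an x-fast trie, keeps each group in a balanced binary search tree, and splits or merges groups when they drift outside the (log M)/4 to 2 log M size bounds.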