
Posts

Showing posts with the label data-structures

Dojo require() and AMD (1.7)

I'm having a heckuva time transitioning to Dojo 1.7 and the new AMD structure, and I'm really hoping someone can shed some light on the whole concept. I've been living on Google for the last few weeks trying to find information not on the usage, but on the structure and design-pattern trends in using it.

javascript data structures library

I'd like to ask for recommendations of JavaScript libraries that supply implementations of some basic data structures, such as a priority queue, a map with arbitrary keys, tries, and graphs, along with some algorithms that operate on them. I'm mostly interested in: the set of features covered; the flexibility of the solution (this mostly applies to graphs, e.g. whether I have to use a supplied graph implementation); the use of the functional features of the language, which again can give greater flexibility; and the performance of the implementation.

EDIT: I'd like to point out that I'm aware it's possible to implement the following data structures in plain JavaScript: a map, if the key values are strings or numbers; a set, built on top of such a map; and a queue, although as was pointed out below it is inefficient on some browsers. At the moment I'm mostly interested in priority queues (not to be confused with regular queues) and graphs.

Java: Best way to store to an arbitrary index of an ArrayList

I know that I cannot store a value at an index of an ArrayList that hasn't been used yet, i.e. an index that is not less than the size. In other words, if myArrayList.size() is 5 and I try to do myArrayList.set(10, "Hello World"), I will get an out-of-bounds error. But my app needs this. Other than looping to store null in each of the intermediate slots, is there a more elegant way? It looks to me like this behavior is the same in Vector, and if I need random access (i.e. the element at position X) then my choices are Vector and ArrayList. I could use a HashMap with the index as the key, but that's really inefficient. So what is the elegant solution to what looks like a common case? I must be missing something...
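
One commonly suggested workaround, sketched below under the assumption that null placeholders are acceptable in the unused slots (the helper name setAt is mine, not a standard API), is to grow the list with nulls until the target index exists and then call set():

    import java.util.ArrayList;
    import java.util.List;

    public class SparseListExample {
        // Hypothetical helper: pad the list with nulls so that `index` becomes valid, then store the value.
        static <T> void setAt(List<T> list, int index, T value) {
            while (list.size() <= index) {
                list.add(null);          // fill the intermediate slots with placeholders
            }
            list.set(index, value);      // index is now < size(), so set() succeeds
        }

        public static void main(String[] args) {
            List<String> myArrayList = new ArrayList<>();
            setAt(myArrayList, 10, "Hello World");
            System.out.println(myArrayList.size());   // 11
            System.out.println(myArrayList.get(10));  // Hello World
        }
    }

If the indices are very sparse, a Map<Integer, String> keyed by the index is another option; lookups are still constant time on average, though it gives up the List interface.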

Find duplicates in large file - java

I have a really large file with approximately 15 million entries. Each line in the file contains a single string (call it a key). I need to find the duplicate entries in the file using Java. I tried using a HashMap to detect duplicate entries, but that approach throws a "java.lang.OutOfMemoryError: Java heap space" error. How can I solve this problem? I could increase the heap space and try it, but I wanted to know whether there are more efficient solutions that don't require tweaking the heap space.
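
One way to keep memory bounded, sketched below (the input name keys.txt, the partition count of 64, and the part-N.txt temporary files are all assumptions, not from the post), is to partition the keys across temporary files by hash and then check each partition in memory with a HashSet; duplicates of a key always hash to the same partition, so no single pass needs to hold all 15 million keys:

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.HashSet;
    import java.util.Set;

    public class FindDuplicates {
        public static void main(String[] args) throws IOException {
            Path input = Paths.get("keys.txt");   // assumed input: one key per line
            int partitions = 64;                  // tune so each partition fits in the heap

            // Phase 1: scatter keys into partition files; equal keys always land in the same file.
            BufferedWriter[] writers = new BufferedWriter[partitions];
            for (int i = 0; i < partitions; i++) {
                writers[i] = Files.newBufferedWriter(Paths.get("part-" + i + ".txt"), StandardCharsets.UTF_8);
            }
            try (BufferedReader in = Files.newBufferedReader(input, StandardCharsets.UTF_8)) {
                String key;
                while ((key = in.readLine()) != null) {
                    int p = Math.floorMod(key.hashCode(), partitions);
                    writers[p].write(key);
                    writers[p].newLine();
                }
            }
            for (BufferedWriter w : writers) {
                w.close();
            }

            // Phase 2: each partition is small enough to check with an in-memory HashSet.
            for (int i = 0; i < partitions; i++) {
                Set<String> seen = new HashSet<>();
                try (BufferedReader in = Files.newBufferedReader(Paths.get("part-" + i + ".txt"), StandardCharsets.UTF_8)) {
                    String key;
                    while ((key = in.readLine()) != null) {
                        if (!seen.add(key)) {
                            System.out.println("duplicate: " + key);
                        }
                    }
                }
            }
        }
    }

An external sort followed by a scan for adjacent equal lines achieves the same effect; either way, only a fraction of the keys is ever resident in memory at once.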