This is my solution, reached after a few attempts at passing the last two test cases.
```java
import java.util.*;

public class Contiguous {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        Deque<Integer> deque = new ArrayDeque<Integer>();
        int n = in.nextInt();
        int m = in.nextInt();
        int maxUnqs = 0;
        int unqs = 0;
        Map<Integer, Integer> map = new HashMap<Integer, Integer>();
        for (int i = 0; i < n; i++) {
            int num = in.nextInt();
            deque.add(num); // adds to tail
            // count occurrence of each number
            Integer c = map.get(num);
            if (c == null || c == 0) {
                map.put(num, 1);
                unqs++;
            } else {
                map.put(num, c + 1);
            }
            if (deque.size() == m) { // we have added m elements; go compute uniques
                if (unqs == m) {
                    maxUnqs = m; // we can't do better
                    break;
                }
                if (unqs > maxUnqs) {
                    maxUnqs = unqs;
                }
                int first = deque.remove(); // we don't need the head any more; keep queue small
                int firstCount = map.get(first);
                if (firstCount == 1) {
                    // the head had the only occurrence of the number, so uniques drop
                    unqs--;
                }
                map.put(first, firstCount - 1); // one less occurrence of the first element now
            }
        }
        System.out.format("%d\n", maxUnqs);
    }
}
```
What strikes me here is that a lot of different things are being done inside a single loop, which makes the code hard to read. Making the code read well, though, costs us speed.
The most readable code will do something like this:
1) Read the input into a collection (possibly an array)
2) Make subarrays of size m, with start positions running from 0 to n-m
3) For each subarray, count the uniques -> we could use a bitmap, a boolean array, or a hashmap for this
4) Keep track of the maximum unique count
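A minimal sketch of that readable version (class and method names are mine, and the counting is refactored into a helper so it can be exercised directly; this is not the submitted solution):

```java
import java.util.HashSet;
import java.util.Set;

public class ContiguousReadable {
    // Return the maximum number of distinct values over all windows of size m.
    static int maxUniques(int[] arr, int m) {
        int best = 0;
        // One window per starting position 0..n-m; count uniques with a set.
        for (int start = 0; start + m <= arr.length; start++) {
            Set<Integer> uniques = new HashSet<>();
            for (int i = start; i < start + m; i++) {
                uniques.add(arr[i]);
            }
            best = Math.max(best, uniques.size());
        }
        return best;
    }

    public static void main(String[] args) {
        // Windows of size 3: {1,2,1}, {2,1,3}, {1,3,2} -> best is 3.
        System.out.println(maxUniques(new int[]{1, 2, 1, 3, 2}, 3)); // prints 3
    }
}
```

The inner set rebuild is exactly what makes this version easy to follow, and exactly what makes it O(nm).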
However, this readable algorithm runs in O(nm) time, whereas the solution given above is O(n). The optimal solution keeps track of up to m integers read, and when the (m+1)th integer is read, it simply removes the effects of the first integer from the window. Thus we go through each integer only once and still solve the problem.
But this alone was not enough to pass the last two test cases. In an earlier attempt I did not update the <unqs> variable on every iteration, but recomputed it from the <map> each time the queue held m items, by scanning the map and counting the keys with non-zero values. Each scan takes up to m steps, and once the first window fills, the queue holds m items on nearly every iteration, so the total work is O(nm) rather than O(n). Although the per-window logic looks cheap, that version timed out.
Another solution I saw uses the map, but removes an entry when its count drops to zero. Then map.size() is always the number of uniques in the window. Since the map's size can be read in constant time, this solution also passes the tests.
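A sketch of that variant (the class and method names are mine, not from the solution I saw):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class ContiguousMapSize {
    // map.size() equals the number of distinct values in the window,
    // because an entry is removed as soon as its count drops to zero.
    static int maxUniques(int[] arr, int m) {
        Deque<Integer> window = new ArrayDeque<>();
        Map<Integer, Integer> counts = new HashMap<>();
        int best = 0;
        for (int num : arr) {
            window.add(num);
            counts.merge(num, 1, Integer::sum); // increment, inserting 1 if absent
            if (window.size() == m) {
                best = Math.max(best, counts.size()); // constant-time unique count
                int first = window.remove();
                if (counts.merge(first, -1, Integer::sum) == 0) {
                    counts.remove(first); // drop zero-count entries to keep size() honest
                }
            }
        }
        return best;
    }
}
```

The trade is a few extra map insertions and removals in exchange for deleting the unqs bookkeeping entirely.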
All this led me to think about this Code Kata. The goals we set out will determine the kind of program we write.
Having said that, I believe it is possible to make this optimal program slightly more readable. If we encapsulate the state of the window of integers into a SlidingWindow class, we could expose four methods:
```java
class SlidingWindow {
    void addNumberToWindow(int num)
    boolean isFullWindow()
    void processFullWindow()
    int maxUnique()
}
```
This at least separates the input-processing loop from the state handling: the occurrence-counting logic at the top of the loop folds into addNumberToWindow, and the full-window logic (recording the maximum and removing the head's effects) goes inside processFullWindow.
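A minimal sketch of that refactoring (the method bodies are my guess at how the logic splits; the early exit when unqs reaches m is omitted for brevity):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class SlidingWindow {
    private final int m;
    private final Deque<Integer> deque = new ArrayDeque<>();
    private final Map<Integer, Integer> map = new HashMap<>();
    private int unqs = 0;
    private int maxUnqs = 0;

    SlidingWindow(int m) {
        this.m = m;
    }

    // Count the occurrence of each number as it enters the window.
    void addNumberToWindow(int num) {
        deque.add(num);
        Integer c = map.get(num);
        if (c == null || c == 0) {
            map.put(num, 1);
            unqs++;
        } else {
            map.put(num, c + 1);
        }
    }

    boolean isFullWindow() {
        return deque.size() == m;
    }

    // Record the current unique count, then slide: remove the head's effects.
    void processFullWindow() {
        if (unqs > maxUnqs) {
            maxUnqs = unqs;
        }
        int first = deque.remove();
        int firstCount = map.get(first);
        if (firstCount == 1) {
            unqs--; // the head held the only occurrence of that number
        }
        map.put(first, firstCount - 1);
    }

    int maxUnique() {
        return maxUnqs;
    }
}
```

The main loop then shrinks to: add each number, call processFullWindow() whenever isFullWindow() is true, and print maxUnique() at the end.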