Creating our own factory method in Java

Any factory method is declared in an interface or abstract class, and is then implemented in the implementation classes or subclasses, as the case may be.

What are factory methods?

A factory method is a method that creates and returns an object on behalf of the class to which it belongs. A single factory method can replace several constructors by accepting different options from the user while creating the object.

For example, to create a factory method getFees() that returns the fee details for a course in an engineering college, we need to perform the following steps:

1> Create an interface or abstract class.

interface Fees {
    double showFees();
}

2> Implement the abstract, public methods of the above interface.

class CSE implements Fees {
    public double showFees() {
        return 120000; // assumed constant figure
    }
}
// there can be more implementation classes as well

3> Create a factory class that contains the factory method, named getFees(). Factory methods are usually written as static methods.

class CourseFees {
    public static Fees getFees(String course) {
        if (course.equalsIgnoreCase("CSE"))
            return new CSE();
        else if (course.equalsIgnoreCase("ECE"))
            return new ECE();
        else
            return null;
    }
}

// getFees() takes the course name from the user and creates an object of either the CSE class or the ECE class, depending on the user's choice.

4> Call the factory method like this:
Fees f = CourseFees.getFees(name);

// In the preceding code, an object of the CSE class or the ECE class is returned by the getFees() method. Since CSE and ECE are implementation classes of the Fees interface, we can use the Fees interface reference 'f' to refer to objects of these classes. Hence, if we call f.showFees(), the showFees() of that particular class (either CSE or ECE) is executed and the corresponding fees are displayed.

// Complete program: combining all 4 steps above
import java.io.*;

interface Fees {
    double showFees();
}

class CSE implements Fees {
    public double showFees() {
        return 120000; // assumed constant figure
    }
}

class ECE implements Fees {
    public double showFees() {
        return 110000; // assumed constant figure
    }
}

class CourseFees {
    public static Fees getFees(String course) {
        if (course.equalsIgnoreCase("CSE"))
            return new CSE();
        else if (course.equalsIgnoreCase("ECE"))
            return new ECE();
        else
            return null;
    }
}

// using the factory method
class Sctpl {
    public static void main(String args[]) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter course name");
        String name = br.readLine();
        Fees f = CourseFees.getFees(name);
        if (f != null) // getFees() returns null for an unknown course
            System.out.println("The fees is Rs " + f.showFees());
        else
            System.out.println("Unknown course: " + name);
    }
}
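
A sample run (assuming the user enters CSE at the prompt) looks like this:

Enter course name
CSE
The fees is Rs 120000.0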

Squeeze play: compression in video interfaces

In 2014 the Video Electronics Standards Association (VESA) introduced the 1.0 version of its Display Stream Compression (DSC) specification, the first standard system for compressing video specifically intended for use with hardwired display interfaces. The DSC standard was also endorsed by the MIPI Alliance, paving the way for widespread use in mobile devices and other applications beyond VESA’s original PC-centric focus.

Last year, version 1.2 was published, extending the feature set to include the 4:2:0 and 4:2:2 YCbCr formats commonly seen in digital television, and the group continues to develop and extend DSC’s capabilities and features.

But why the need for compression in the first place? Is it a good thing overall? Simply put, DSC’s adoption is driven by the seemingly insatiable appetite for more pixels, greater bit depth, and ever-increasing refresh rates. While the real need for some of these is debatable, there’s no argument that, especially in mobile devices, there’s a need to deliver high-quality, high-definition images while consuming the bare minimum of power. That leads to the need for compression.

A 1920 x 1080 image – considered just a moderate “resolution” these days – at a 60 Hz refresh rate and using 24-bit-per-pixel RGB encoding requires transmitting almost 3 gigabits of information every second between source and display, and that’s not even counting the inevitable overhead. Move up to the “8K” video now coming to the market, with sixteen times the pixels, and almost 48 billion bits of information need to move every second. That’s fast enough to fill a 1 TB drive in well under three minutes.
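
These figures are easy to check. Here is a quick back-of-the-envelope calculation in Java (the class and method names are just for illustration):

class RawVideoBandwidth {
    // raw bits per second = width x height x bits-per-pixel x refresh rate
    static double gbitsPerSecond(int width, int height, int bitsPerPixel, int refreshHz) {
        return (double) width * height * bitsPerPixel * refreshHz / 1e9;
    }
    public static void main(String[] args) {
        // 1920 x 1080 at 24 bpp, 60 Hz: prints ~2.99 (Gbit/s)
        System.out.println(gbitsPerSecond(1920, 1080, 24, 60));
        // 7680 x 4320 ("8K") at 24 bpp, 60 Hz: prints ~47.78 (Gbit/s)
        System.out.println(gbitsPerSecond(7680, 4320, 24, 60));
    }
}

The second figure also confirms the drive claim: a 1 TB drive holds 8 x 10^12 bits, and 8 x 10^12 divided by 47.8 x 10^9 is about 167 seconds, comfortably under three minutes.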

(Image: Leawo) The move from 1080p to 4K, HDR, and even 8K content requires more and more data, increasing the necessity for compression to shrink file sizes.

Digital interface standards like DisplayPort and HDMI have done an admirable job of keeping up with this growing appetite for data capacity. DisplayPort 1.4 is capable of over 32 Gbits/sec., and future versions are expected to push that to 40 Gbits and higher. But these increases come at a price; all else being equal, faster transmission rates always take more power, on top of the generally higher power requirements of higher-resolution displays. Something has to give.

Compression is actually a pretty old idea, and it’s based on the fact that data (and especially image data) generally contains a lot of unnecessary information; there’s a high degree of redundancy.

Let’s say I point an HDTV camera at a uniformly white wall. It’s still sending out that three gigabits of data every second, even though you might as well be sending a simple “this frame is the same as the last one” message after the first one has been sent. Even within that first frame, if the picture is truly just a uniform white, you should be able to get away with sending just a single white pixel and then indicating, somehow, “don’t worry about anything else – they all look like that!” The overwhelming majority of that 3 Gbits/sec data torrent is wasted.
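
That “they all look like that!” idea is essentially run-length encoding. Here is a toy sketch in Java (purely illustrative, not the algorithm of any real interface standard):

import java.util.ArrayList;
import java.util.List;

class RunLengthSketch {
    // Collapse each run of identical pixel values into a (value, count) pair.
    static List<int[]> encode(int[] pixels) {
        List<int[]> runs = new ArrayList<>();
        int i = 0;
        while (i < pixels.length) {
            int value = pixels[i], count = 0;
            while (i < pixels.length && pixels[i] == value) { i++; count++; }
            runs.add(new int[]{ value, count });
        }
        return runs;
    }
    public static void main(String[] args) {
        int[] whiteFrame = new int[1920 * 1080]; // a perfectly uniform frame
        // The entire two-million-pixel frame collapses into a single run.
        System.out.println(encode(whiteFrame).size()); // prints 1
    }
}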

In mobile devices, compression standards give us the means for connecting high-res external displays — like VR headsets — without chewing through the battery or needing a huge connector.

In a perfect situation we could eliminate everything but that single pixel of information and still wind up with a picture that would be identical to the original: a perfectly uniform white screen. This would be a case of completely lossless compression — if we can assume that “perfect” situation. What eliminating redundancy does, though, in addition to reducing the amount of data you need to transmit, is to make it all that much more important that the data you are sending gets through unchanged. In other words, you’ve made your video stream much more sensitive to noise. Imagine what happens if, in sending that one pixel’s worth of “white” that’s going to set the color for the whole screen, a burst of noise knocks out all the blue information. You wind up with red and green, but no blue, which turns our white screen yellow. Since we’ve stopped sending all those redundant frames, it stays that way until a change in the source image causes something new to be sent.

The goal is to come up with a compression system that is visually lossless

So compression, even “mathematically lossless” compression, can still have an impact on the image quality at the receiving end. The goal is to come up with a compression system that is visually lossless, meaning it results in images indistinguishable from the uncompressed video signal by any human viewer. Careful design of the compression system can enable this while still allowing a significant reduction in the amount of data sent.

Imagine that instead of a plain white image, we’re sending typical video; coverage of a baseball game, for instance. But instead of sending each pixel of every frame, we send every other pixel. Odd pixels on one frame, and even pixels on the next. I’ve just cut the data rate in half, but thanks to the redundancy of information across frames, and the fact that I’m still maintaining a 60 Hz rate, the viewer never sees the difference. The “missing” data is made up, too rapidly to be noticed. That’s not something that’s actually used in any compression standard, as far as I know, but it shows how a simple “visually lossless” compression scheme might work.
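
As a sketch, that odd/even scheme might look like this in Java (a toy model only; no real standard works exactly this way):

class AlternatePixelSketch {
    // On each frame, transmit only the pixels whose index parity
    // matches the frame number, halving the data sent per frame.
    static int[] sample(int[] frame, int frameNumber) {
        int parity = frameNumber % 2;
        int[] half = new int[(frame.length - parity + 1) / 2];
        for (int i = parity, j = 0; i < frame.length; i += 2, j++)
            half[j] = frame[i];
        return half;
    }
    public static void main(String[] args) {
        int[] frame = { 10, 20, 30, 40, 50 };
        System.out.println(sample(frame, 0).length); // 3 (indices 0, 2, 4)
        System.out.println(sample(frame, 1).length); // 2 (indices 1, 3)
    }
}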

If you’re familiar with the history of video, that example may have sounded awfully familiar. It’s very close to interlaced transmission, which was used in the original analog TV systems. Interlacing can be understood as a crude form of data compression. It’s not really going to be completely visually lossless; some visible artifacts would still be expected (especially when objects are moving within the image). But even such a simple system would still give surprisingly good results while saving a lot of interface bandwidth.

(Image: Synopsys) An example of how DSC and DSI interoperate on the host and device sides, and sample compression rates with and without DSC.

VESA’s DSC specification is a good deal more sophisticated, and produces truly visually lossless results in a large number of tests. The system can provide compression on the order of 3:1, easily permitting “8K” video streams to even be carried over earlier versions of DisplayPort or HDMI. It does this via a relatively simple yet elegant algorithm that can be implemented in a minimum of additional circuitry, keeping the power load down to something easily handled in a mobile product — possibly even providing a net savings over running the interface at the full, uncompressed rate.
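
To put numbers on that 3:1 figure: the roughly 48 Gbit/s of raw “8K” video drops to about 16 Gbit/s after compression, comfortably within the 32-plus Gbit/s that DisplayPort 1.4 already delivers.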

If you’re worried about any sort of compression still having a visible effect on your screen, consider the following. Over-the-air HDTV broadcasts are possible only because of the very high degree of compression that was built into the digital TV standard. Squeezing a full-HD broadcast, even one in which the source is an interlaced format like “1080i,” requires compression ratios on the order of 50:1 or more. The 1.5 Gbits per second of a 1080i, 60 Hz video stream had to be shoehorned into a 6 MHz channel (providing at best a little more than a 19 megabit-per-second capacity). HDTV broadcasts very typically work with less than a single bit per pixel in the final compressed data stream as it’s sent over the air, yet still result in a clear, sharp HD image on your screen. When unusually high noise levels come up, the now-familiar blocky “compression artifacts” of digital TV pop up, but this really doesn’t happen all that often. Proprietary systems such as broadcast satellite or cable TV can use even heavier compression, and as a result show these sorts of problems much more frequently.
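
The arithmetic bears that out: 1.5 Gbit/s squeezed into roughly 19 Mbit/s works out to about 79:1, and in practice the video gets even less than that, since audio and metadata take their own share of the channel.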

In the better-controlled environment of a wired digital interface, and with the much milder compression ratios of DSC, images transmitted using this system will probably be visually perfect. In mobile devices, compression standards such as these will give us the means for connecting high-res external displays — like VR headsets — without chewing through the battery or needing a huge connector.

You’ll very likely never even know it’s there.

Heard of Cordova? Hybrid App Dev!

Hybrid App Development

A hybrid application (hybrid app) is one that combines elements of both native and Web applications. Native applications are developed for a specific platform and installed on a computing device. Web applications are generalized for multiple platforms and not installed locally but made available over the Internet through a browser.

Cordova

Apache Cordova is an open-source mobile development framework. It allows you to use standard web technologies – HTML5, CSS3, and JavaScript – for cross-platform development. Applications execute within wrappers targeted to each platform, and rely on standards-compliant API bindings to access each device’s capabilities such as sensors, data, and network status.

Web applications cannot use native mobile functionality by default. This is where Cordova comes in: it provides a bridge between the web app and the mobile device. Using Cordova, we can build hybrid mobile apps that can use the camera, geolocation, file system, and other native mobile functions.

Following are the features of Cordova:

Command Line Interface:- This tool allows you to create new projects, build them for different platforms, and run them on real devices or within emulators.

Cordova Core Components:- Cordova offers a set of core components that every mobile application needs. These components are used to create the base of the app, so we can spend more time implementing our own logic.

Cordova Plugins:- All the main Cordova API features are implemented as plugins, and many others are available to enable features such as bar code scanners, NFC communication, or tailored calendar interfaces. An example of installing a plugin follows this list.

Licence:- Cordova is licensed under the Apache License, Version 2.0.
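
For example, once a project has been created (see "Creating App" below), the official camera plugin can be installed from the project directory like this:

C:\Users\rocky\Desktop\Myhybridapp>cordova plugin add cordova-plugin-camera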

Environment Setup

Let's see the environment setup for Cordova. Before starting with the setup, you need to install the following components:
NodeJS and NPM:- NodeJS is the platform needed for Cordova development.
Android SDK:- You need the Android SDK for the Android platform.
XCode:- It is required for the iOS platform.
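
You can confirm that NodeJS and NPM are installed correctly by checking their versions from the command prompt:

C:\Users\username>node -v
C:\Users\username>npm -v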

Installing Cordova

In this installation we are using Windows Command prompt.

Step 1: Installing Git
Cordova needs Git; it uses it for some background processes. After you install Git, follow these steps to set up the environment variable:

  1. Right-Click on Computer
  2. Properties
  3. Advanced System settings
  4. Environment Variables
  5. System Variables
  6. Edit

Copy the following to the end of the variable value field. This is the default path of the Git installation; if you installed it in a different location, use that path instead of the example below.

;C:\Program Files (x86)\Git\bin;C:\Program Files (x86)\Git\cmd

Now you can type git in your command prompt to test whether the installation was successful.

Step 2: Installing Cordova

Open the command prompt and run the following command:
C:\Users\username>npm install -g cordova

You can check the installed version by running the following command:
C:\Users\username>cordova -v
This is everything you need to start developing Cordova apps on the Windows operating system.

Let's create a sample hybrid app using Cordova.

Creating App

Open the directory where you want the app to be created in the command prompt. We will create it on the desktop.

C:\Users\rocky\Desktop>cordova create Myhybridapp io.cordova.hellocordova HybridApp

Myhybridapp is the directory name where the app is created.
io.cordova.hellocordova is the default reverse-domain identifier. You should use your own domain value if possible.
HybridApp is the title of your app.
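
The create command generates the basic project skeleton. Depending on the Cordova version, the Myhybridapp directory will typically contain (among other files):

Myhybridapp
  config.xml   (global configuration of the app)
  www          (your HTML, CSS, and JavaScript)
  platforms    (populated by "cordova platform add")
  plugins      (populated by "cordova plugin add")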

Adding Platforms

Open your project directory in the command prompt; in this example, it is Myhybridapp. Choose only the platforms you need, and note that to use a specific platform you must have that platform's SDK installed. Since we are developing on Windows and have already installed the Android SDK, we will add only the Android platform for this example.
C:\Users\rocky\Desktop\Myhybridapp>cordova platform add android

There are other platforms that can be used on Windows OS.
C:\Users\rocky\Desktop\Myhybridapp>cordova platform add wp8
C:\Users\rocky\Desktop\Myhybridapp>cordova platform add amazon-fireos
C:\Users\rocky\Desktop\Myhybridapp>cordova platform add windows
C:\Users\rocky\Desktop\Myhybridapp>cordova platform add blackberry10
C:\Users\rocky\Desktop\Myhybridapp>cordova platform add firefoxos

If you are developing on a Mac, you can use:-
$ cordova platform add ios
$ cordova platform add amazon-fireos
$ cordova platform add android
$ cordova platform add blackberry10
$ cordova platform add firefoxos

You can also remove a platform from your project by using:-
C:\Users\rocky\Desktop\Myhybridapp>cordova platform rm android
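
At any point you can list the platforms installed in (and available for) the project:

C:\Users\rocky\Desktop\Myhybridapp>cordova platform ls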

Building and Running App

In this step we will build the app for a specified platform so we can run it on a mobile device or an emulator.
C:\Users\rocky\Desktop\Myhybridapp>cordova build android

Now we can run our app. If you are using the default emulator, use:
C:\Users\rocky\Desktop\Myhybridapp>cordova emulate android

If you want to use an external emulator or a real device, use:
C:\Users\rocky\Desktop\Myhybridapp>cordova run android

We will use the Genymotion Android emulator, since it is faster and more responsive than the default one; it can be downloaded from the Genymotion website. You can also use a real device for testing by enabling USB debugging in its options and connecting it to your computer via USB cable. Once we run the app, it will be installed on the platform we specified. If everything finishes without errors, the output should show the default start screen of the app.

(Image: the default start screen of a Cordova app)

Use Apache Cordova if you are:

  • A mobile developer and want to extend an application across more than one platform, without having to re-implement it with each platform’s language and tool set.
  • A web developer and want to deploy a web app that’s packaged for distribution in various app store portals.
  • A mobile developer interested in mixing native application components with a WebView (special browser window) that can access device-level APIs, or if you want to develop a plugin interface between native and WebView components.
