Clipify
A powerful tool for processing video content into social media-friendly segments.
An AI-powered video processing toolkit for creating social media-optimized content with automated transcription, captioning, and thematic segmentation.
Key Features
Content Processing
- Video Processing Pipeline (sketched in code below)
  - Automated audio extraction and speech-to-text conversion
  - Smart thematic segmentation using AI
  - Mobile-optimized format conversion (9:16, 4:5, 1:1)
  - Intelligent caption generation and overlay
AI Capabilities
- Advanced Analysis
  - Context-aware content segmentation
  - Dynamic title generation
  - Smart keyword and hashtag extraction
  - Sentiment analysis for content optimization
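The stages above are wired together by the high-level Clipify class, but each one is also available as a standalone component (documented under Usage Examples). The following is a minimal sketch of the pipeline built from those components, reusing only the class and method signatures shown later in this README:

from clipify.audio.extractor import AudioExtractor
from clipify.audio.speech import SpeechToText
from clipify.core.ai_providers import HyperbolicAI
from clipify.core.text_processor import SmartTextProcessor

# 1. Pull the audio track out of the source video
audio_path = AudioExtractor().extract_audio(
    video_path="input.mp4",
    output_path="input_audio.wav"
)

# 2. Transcribe it with word-level timing
transcript = SpeechToText(model_size="base").convert_to_text(audio_path)

# 3. Segment the transcript by theme and generate titles and keywords
processor = SmartTextProcessor(HyperbolicAI(api_key="your-api-key"))
themes = processor.segment_by_theme(transcript["text"])

for segment in themes["segments"]:
    print(segment["title"], segment["keywords"])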
Platform Options
- Desktop Application
  - Intuitive graphical interface
  - Drag-and-drop functionality
  - Real-time processing feedback
  - Batch processing capabilities
- Server Deployment
  - RESTful API integration
  - Asynchronous processing with webhooks (see the sketch below)
  - Multi-tenant architecture
  - Containerized deployment support
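The server API itself ships with the Clipify-hub deployment and is not documented in this README, so the endpoint path, payload fields, and callback contract below are hypothetical placeholders; the sketch only illustrates the asynchronous, webhook-driven submission flow described above.

import requests

# Hypothetical endpoint and payload -- adjust to your actual Clipify-hub deployment.
SERVER_URL = "http://localhost:8000/api/process"  # hypothetical URL

response = requests.post(
    SERVER_URL,
    json={
        "video_url": "https://example.com/input.mp4",        # hypothetical field name
        "mobile_ratio": "9:16",
        "add_captions": True,
        # The server calls this URL back when processing finishes (hypothetical field name)
        "webhook_url": "https://example.com/clipify-callback",
    },
    timeout=30,
)
response.raise_for_status()
print("Submitted job:", response.json())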
Quick Start
Desktop Application
Check out the full project built on Clipify at https://github.com/adelelawady/Clipify-hub and download and install the latest desktop version from there.
Python Package Installation
# Via pip
pip install clipify
# From source
git clone https://github.com/adelelawady/Clipify.git
cd Clipify
pip install -r requirements.txt
Usage Examples
Basic Implementation
from clipify.core.clipify import Clipify
# Initialize with basic configuration
clipify = Clipify(
    provider_name="hyperbolic",
    api_key="your-api-key",
    model="deepseek-ai/DeepSeek-V3",
    convert_to_mobile=True,
    add_captions=True
)

# Process video
result = clipify.process_video("input.mp4")

# Handle results
if result:
    print(f"Created {len(result['segments'])} segments")
    for segment in result['segments']:
        print(f"Segment {segment['segment_number']}: {segment['title']}")
Advanced Configuration
clipify = Clipify(
    # AI Configuration
    provider_name="hyperbolic",
    api_key="your-api-key",
    model="deepseek-ai/DeepSeek-V3",
    max_tokens=5048,
    temperature=0.7,

    # Video Processing
    convert_to_mobile=True,
    add_captions=True,
    mobile_ratio="9:16",

    # Caption Styling
    caption_options={
        "font": "Bangers-Regular.ttf",
        "font_size": 60,
        "font_color": "white",
        "stroke_width": 2,
        "stroke_color": "black",
        "highlight_current_word": True,
        "word_highlight_color": "red",
        "shadow_strength": 0.8,
        "shadow_blur": 0.08,
        "line_count": 1,
        "padding": 50,
        "position": "bottom"
    }
)
AudioExtractor
from clipify.audio.extractor import AudioExtractor
# Initialize audio extractor
extractor = AudioExtractor()
# Extract audio from video
audio_path = extractor.extract_audio(
    video_path="input_video.mp4",
    output_path="extracted_audio.wav"
)

if audio_path:
    print(f"Audio successfully extracted to: {audio_path}")
SpeechToText
from clipify.audio.speech import SpeechToText
# Initialize speech to text converter
converter = SpeechToText(model_size="base") # Options: tiny, base, small, medium, large
# Convert audio to text with timing
result = converter.convert_to_text("audio_file.wav")
if result:
    print("Transcript:", result['text'])
    print("\nWord Timings:")
    for word in result['word_timings'][:5]:  # Show first 5 words
        print(f"Word: {word['text']}")
        print(f"Time: {word['start']:.2f}s - {word['end']:.2f}s")
VideoConverter
from clipify.video.converter import VideoConverter
# Initialize video converter
converter = VideoConverter()
# Convert video to mobile format with blurred background
result = converter.convert_to_mobile(
    input_video="landscape_video.mp4",
    output_video="mobile_video.mp4",
    target_ratio="9:16"  # Options: "1:1", "4:5", "9:16"
)

if result:
    print("Video successfully converted to mobile format")
VideoConverterStretch
from clipify.video.converterStretch import VideoConverterStretch
# Initialize stretch converter
stretch_converter = VideoConverterStretch()
# Convert video using stretch method
result = stretch_converter.convert_to_mobile(
    input_video="landscape.mp4",
    output_video="stretched.mp4",
    target_ratio="4:5"  # Options: "1:1", "4:5", "9:16"
)

if result:
    print("Video successfully converted using stretch method")
VideoProcessor
from clipify.video.processor import VideoProcessor
# Initialize video processor with caption styling
processor = VideoProcessor(
    # Font settings
    font="Bangers-Regular.ttf",
    font_size=60,
    font_color="white",

    # Text effects
    stroke_width=2,
    stroke_color="black",
    shadow_strength=0.8,
    shadow_blur=0.08,

    # Caption behavior
    highlight_current_word=True,
    word_highlight_color="red",
    line_count=1,
    padding=50,
    position="bottom"  # Options: "bottom", "top", "center"
)

# Process video with captions
result = processor.process_video(
    input_video="input_video.mp4",
    output_video="captioned_output.mp4",
    use_local_whisper="auto"  # Options: "auto", True, False
)

if result:
    print("Video successfully processed with captions")

# Process multiple video segments
segment_files = ["segment1.mp4", "segment2.mp4", "segment3.mp4"]
processed_segments = processor.process_video_segments(
    segment_files=segment_files,
    output_dir="processed_segments"
)
The VideoProcessor provides powerful captioning capabilities:
- Customizable font styling and text effects
- Word-level highlighting for better readability
- Shadow and stroke effects for visibility
- Automatic speech recognition using Whisper
- Support for batch processing multiple segments
VideoCutter
from clipify.video.cutter import VideoCutter
# Initialize video cutter
cutter = VideoCutter()
# Cut a specific segment
result = cutter.cut_video(
    input_video="full_video.mp4",
    output_video="segment.mp4",
    start_time=30.5,  # Start at 30.5 seconds
    end_time=45.2     # End at 45.2 seconds
)

if result:
    print("Video segment successfully cut")
SmartTextProcessor
from clipify.core.text_processor import SmartTextProcessor
from clipify.core.ai_providers import HyperbolicAI
# Initialize AI provider and text processor
ai_provider = HyperbolicAI(api_key="your_api_key")
processor = SmartTextProcessor(ai_provider)
# Process text content
text = "Your long text content here..."
segments = processor.segment_by_theme(text)
if segments:
    for segment in segments['segments']:
        print(f"\nTitle: {segment['title']}")
        print(f"Keywords: {', '.join(segment['keywords'])}")
        print(f"Content length: {len(segment['content'])} chars")
Project Structure
clipify/
├── clipify/
│   ├── __init__.py              # Package initialization and version
│   ├── core/
│   │   ├── __init__.py
│   │   ├── clipify.py           # Main Clipify class
│   │   ├── processor.py         # Content processing logic
│   │   ├── text_processor.py    # Text analysis and segmentation
│   │   └── ai_providers.py      # AI provider implementations
│   ├── video/
│   │   ├── __init__.py
│   │   ├── cutter.py            # Video cutting functionality
│   │   ├── converter.py         # Mobile format conversion
│   │   ├── converterStretch.py  # Alternative conversion method
│   │   └── processor.py         # Video processing and captions
│   ├── audio/
│   │   ├── __init__.py
│   │   ├── extractor.py         # Audio extraction from video
│   │   └── speech.py            # Speech-to-text conversion
│   └── utils/                   # Utility functions
│       ├── __init__.py
│       └── helpers.py
├── .gitignore                   # Git ignore rules
├── LICENSE                      # MIT License
├── MANIFEST.in                  # Package manifest
├── README.md                    # Project documentation
├── requirements.txt             # Dependencies
└── setup.py                     # Package setup
Configuration Options
AI Providers
- hyperbolic: Default provider with the DeepSeek-V3 model
- openai: OpenAI GPT model support
- anthropic: Anthropic Claude models
- ollama: Local model deployment
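The provider is selected with the provider_name argument used throughout the examples above. A minimal sketch, assuming the other constructor arguments are the same for every provider; the gpt-4o model identifier is an assumption, not a documented default:

from clipify.core.clipify import Clipify

# Same constructor as the Basic Implementation example, with a different AI backend.
# "gpt-4o" is an assumed model identifier; substitute whatever model your account offers.
clipify = Clipify(
    provider_name="openai",
    api_key="your-openai-api-key",
    model="gpt-4o",
    convert_to_mobile=True,
    add_captions=True
)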
Video Formats
- Aspect Ratios: 1:1, 4:5, 9:16 (see the conversion sketch below)
- Output Formats: MP4, MOV
- Quality Presets: Low, Medium, High
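The aspect ratios map to the target_ratio parameter of VideoConverter.convert_to_mobile (and the mobile_ratio argument of Clipify) shown earlier. A small sketch producing one output per supported ratio; the output filenames are illustrative only:

from clipify.video.converter import VideoConverter

converter = VideoConverter()

# Produce one mobile-ready file per supported aspect ratio.
for ratio in ("1:1", "4:5", "9:16"):
    converter.convert_to_mobile(
        input_video="landscape_video.mp4",
        output_video=f"mobile_{ratio.replace(':', 'x')}.mp4",
        target_ratio=ratio
    )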
Caption Customization
- Font customization
- Color schemes
- Position options
- Animation effects
- Word highlighting (see the styling example below)
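These options correspond to the caption_options dictionary passed to Clipify (and to the VideoProcessor constructor) in the Usage Examples. A minimal styling sketch that reuses only keys appearing in those examples; the specific values are illustrative, and animation effects beyond word highlighting are not shown here:

from clipify.core.clipify import Clipify

caption_options = {
    "font": "Bangers-Regular.ttf",   # font customization
    "font_color": "yellow",          # color scheme
    "highlight_current_word": True,  # word highlighting
    "word_highlight_color": "red",
    "position": "top",               # position option: "bottom", "top", "center"
    "line_count": 2,
    "padding": 40,
}

clipify = Clipify(
    provider_name="hyperbolic",
    api_key="your-api-key",
    model="deepseek-ai/DeepSeek-V3",
    add_captions=True,
    caption_options=caption_options
)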
Contributing
We welcome contributions! Here's how you can help:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Please read our Contributing Guidelines for details.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
- Enterprise Support: Contact adel50ali5b@gmail.com
- Community Support: GitHub Issues
- Documentation: Wiki
Acknowledgments
- FFmpeg for video processing
- OpenAI for AI capabilities
- PyTorch community
- All contributors and supporters